[jira] [Updated] (JCR-3239) Removal of a top level node doesn't update the hierarchy manager's cache.

2014-05-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3239:
---

Priority: Major  (was: Minor)

 Removal of a top level node doesn't update the hierarchy manager's cache.
 -

 Key: JCR-3239
 URL: https://issues.apache.org/jira/browse/JCR-3239
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.11, 2.4
Reporter: Philipp Bärfuss

 *Problem*
 Scenario in which I encounter the problem:
 - given a node 'test' under root (/test)
 - re-imported the node after a deletion (all in-session operations) 
 {code}
 session.removeItem("/test");
 session.importXML("/", stream, 
 ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW);
 {code}
 Result: throws an ItemExistsException
 If the same operations are executed deeper in the hierarchy (for instance 
 /foo/bar) then the code works perfectly fine.
 *Findings*
 - session.removeItem informs the hierarchy manager (via listener)
 -- CachingHierarchyManager.nodeRemoved(NodeState, Name, int, NodeId)
 - but the root node (passed as state) is not in the cache and hence the entry 
 of the top level node is not removed
 -- CachingHierarchyManager: 458 
 - while importing, the method SessionImporter.startNode(NodeInfo, 
 List<PropInfo>) calls session.getHierarchyManager().getName(id) (line 400)
 - the stale information causes a uuid collision (the code expects an 
 exception if the node doesn't exist, but in this case it returns the name of 
 the formerly removed node)
 Note: session.itemExists() and session.getNode() work as expected (the former 
 returns false, the latter throws an ItemNotFoundException)
 Note: I know that a different import behavior (replace existing) would solve 
 the issue, but I can't be 100% sure that the UUIDs match, so I favor collision 
 throw in my case.
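 For illustration, a minimal sketch of the reported scenario (class and method 
 names are mine; the stream is assumed to contain a previously exported copy of 
 /test):
 {code}
 import java.io.InputStream;
 import javax.jcr.ImportUUIDBehavior;
 import javax.jcr.Session;

 public class TopLevelReimportSketch {
     static void removeAndReimport(Session session, InputStream stream) throws Exception {
         // transient removal of the top level node
         session.removeItem("/test");
         // expected: the import succeeds because /test is gone in this session;
         // observed: ItemExistsException, because CachingHierarchyManager still
         // resolves the stale /test entry (the root state is not in its cache,
         // so nodeRemoved() never evicts the child entry)
         session.importXML("/", stream,
                 ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW);
     }
 }
 {code}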



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3770:
---

Fix Version/s: 2.7.6

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 2.7.6


 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}
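 For illustration, one hedged sketch (illustrative paths; one of several ways to 
 get a delete and a re-add of the same nodeId into a single changeLog): export a 
 referenceable node, remove it, re-import it so that it keeps its jcr:uuid, and 
 save once.
 {code}
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import javax.jcr.ImportUUIDBehavior;
 import javax.jcr.Session;

 public class SameIdReAddSketch {
     static void removeAndReAdd(Session session) throws Exception {
         ByteArrayOutputStream out = new ByteArrayOutputStream();
         session.exportSystemView("/container/child", out, false, false);

         session.removeItem("/container/child");
         session.importXML("/container",
                 new ByteArrayInputStream(out.toByteArray()),
                 ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW);

         // a single save: both the removal and the re-added node (same nodeId,
         // taken from the exported jcr:uuid) end up in the same changeLog, which
         // is where validateHierarchy may report the false positive
         session.save();
     }
 }
 {code}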



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created JCR-3770:
--

 Summary: refine validateHierarchy check in order to avoid 
false-positives
 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg


if a node is deleted and re-added with the same nodeId (within the same 
changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
{code}
Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of node 
with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
at 
org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
at 
org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
...
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13969585#comment-13969585
 ] 

Stefan Guggisberg commented on JCR-3770:


JCR-2598 introduced the optional validateHierarchy check.

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (JCR-3770) refine validateHierarchy check in order to avoid false-positives

2014-04-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3770.


Resolution: Fixed

fixed in svn revision 1587619

 refine validateHierarchy check in order to avoid false-positives
 

 Key: JCR-3770
 URL: https://issues.apache.org/jira/browse/JCR-3770
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 2.7.6


 if a node is deleted and re-added with the same nodeId (within the same 
 changeLog), validateHierarchy might mistakenly throw an exception, e.g. 
 {code}
 Caused by: org.apache.jackrabbit.core.state.ItemStateException: Parent of 
 node with id 37e8c22a-5fa7-4dd8-908e-c94249f3715a has been deleted
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateModified(SharedItemStateManager.java:1352)
 at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.validateHierarchy(SharedItemStateManager.java:1199)
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3581:
--

Assignee: Stefan Guggisberg

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Attachments: JCR-3581.patch


 The BitsetKey class encodes Names bitwise in one or more long values.
 For its Comparable.compareTo implementation, the long value(s) are compared 
 first by >>> 32 shifting to compare the higher bits, and if those are equal, 
 by comparing the lower 32 bits.
 The bug is in the latter part: instead of masking off the higher 32 bits 
 using & 0xffffffffL, the current code is masking off the higher 48 bits using 
 & 0xffffL, as shown in the snippet below from the current compareTo 
 implementation:
 long w1 = adr < bits.length ? bits[adr] : 0;
 long w2 = adr < o.bits.length ? o.bits[adr] : 0;
 if (w1 != w2) {
     // some signed arithmetic here
     long h1 = w1 >>> 32;
     long h2 = w2 >>> 32;
     if (h1 == h2) {
         h1 = w1 & 0xffffL;
         h2 = w2 & 0xffffL;
     }
     return Long.signum(h2 - h1);
 }
 As a result of this error many possible keys cannot and will not be stored in 
 the BitsetENTCacheImpl private TreeSet<Key> sortedKeys: only one key for 
 every key with a value between 0xffffL and 0xffffffffL (for *each* long 
 field) will be stored.
 Note that such 'duplicate' keys however will be used and stored in the other 
 BitsetENTCacheImpl private HashMap<Key, EffectiveNodeType> aggregates.
 As far as I can tell this doesn't really 'break' the functionality, but it can 
 lead to much redundant and unnecessary (re)creation of keys and entries in 
 the aggregates map.
 The fix of course is easy, but I will also provide a patch file with fixes for 
 the two (largely duplicate?) BitsetENTCacheImpl implementations 
 (jackrabbit-core and jackrabbit-jcr2spi).
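 For illustration, a minimal sketch of the comparison step with the wider mask 
 the description calls for (names are illustrative, not the actual patch):
 {code}
 public class LowerBitsCompareSketch {
     static int compareWords(long w1, long w2) {
         long h1 = w1 >>> 32;
         long h2 = w2 >>> 32;
         if (h1 == h2) {
             // mask off only the higher 32 bits so bits 16-31 take part as well
             h1 = w1 & 0xffffffffL;
             h2 = w2 & 0xffffffffL;
         }
         // both operands are non-negative and below 2^32, so the subtraction
         // cannot overflow
         return Long.signum(h2 - h1);
     }
 }
 {code}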

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3581.


   Resolution: Fixed
Fix Version/s: 2.7
   2.6.1

committed patch in svn r1478684

thanks, good catch!

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Fix For: 2.6.1, 2.7

 Attachments: JCR-3581.patch


 The BitsetKey class encodes Names bitwise in one or more long values.
 For its Comparable.compareTo implementation, the long value(s) are compared 
 first by >>> 32 shifting to compare the higher bits, and if those are equal, 
 by comparing the lower 32 bits.
 The bug is in the latter part: instead of masking off the higher 32 bits 
 using & 0xffffffffL, the current code is masking off the higher 48 bits using 
 & 0xffffL, as shown in the snippet below from the current compareTo 
 implementation:
 long w1 = adr < bits.length ? bits[adr] : 0;
 long w2 = adr < o.bits.length ? o.bits[adr] : 0;
 if (w1 != w2) {
     // some signed arithmetic here
     long h1 = w1 >>> 32;
     long h2 = w2 >>> 32;
     if (h1 == h2) {
         h1 = w1 & 0xffffL;
         h2 = w2 & 0xffffL;
     }
     return Long.signum(h2 - h1);
 }
 As a result of this error many possible keys cannot and will not be stored in 
 the BitsetENTCacheImpl private TreeSet<Key> sortedKeys: only one key for 
 every key with a value between 0xffffL and 0xffffffffL (for *each* long 
 field) will be stored.
 Note that such 'duplicate' keys however will be used and stored in the other 
 BitsetENTCacheImpl private HashMap<Key, EffectiveNodeType> aggregates.
 As far as I can tell this doesn't really 'break' the functionality, but it can 
 lead to much redundant and unnecessary (re)creation of keys and entries in 
 the aggregates map.
 The fix of course is easy, but I will also provide a patch file with fixes for 
 the two (largely duplicate?) BitsetENTCacheImpl implementations 
 (jackrabbit-core and jackrabbit-jcr2spi).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3581) Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo implementation - wrong bit mask value used

2013-05-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3581:
---

Fix Version/s: 2.4.4

 Incorrect bitwise arithmetic in BitsetENTCacheImpl.BitsetKey.compareTo 
 implementation - wrong bit mask value used  
 ---

 Key: JCR-3581
 URL: https://issues.apache.org/jira/browse/JCR-3581
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, jackrabbit-jcr2spi
Affects Versions: 2.6, 2.7
Reporter: Ate Douma
Assignee: Stefan Guggisberg
 Fix For: 2.4.4, 2.6.1, 2.7

 Attachments: JCR-3581.patch


 The BitsetKey class encodes Names bitwise in one or more long values.
 For its Comparable.compareTo implementation, the long value(s) are compared 
 first by >>> 32 shifting to compare the higher bits, and if those are equal, 
 by comparing the lower 32 bits.
 The bug is in the latter part: instead of masking off the higher 32 bits 
 using & 0xffffffffL, the current code is masking off the higher 48 bits using 
 & 0xffffL, as shown in the snippet below from the current compareTo 
 implementation:
 long w1 = adr < bits.length ? bits[adr] : 0;
 long w2 = adr < o.bits.length ? o.bits[adr] : 0;
 if (w1 != w2) {
     // some signed arithmetic here
     long h1 = w1 >>> 32;
     long h2 = w2 >>> 32;
     if (h1 == h2) {
         h1 = w1 & 0xffffL;
         h2 = w2 & 0xffffL;
     }
     return Long.signum(h2 - h1);
 }
 As a result of this error many possible keys cannot and will not be stored in 
 the BitsetENTCacheImpl private TreeSet<Key> sortedKeys: only one key for 
 every key with a value between 0xffffL and 0xffffffffL (for *each* long 
 field) will be stored.
 Note that such 'duplicate' keys however will be used and stored in the other 
 BitsetENTCacheImpl private HashMap<Key, EffectiveNodeType> aggregates.
 As far as I can tell this doesn't really 'break' the functionality, but it can 
 lead to much redundant and unnecessary (re)creation of keys and entries in 
 the aggregates map.
 The fix of course is easy, but I will also provide a patch file with fixes for 
 the two (largely duplicate?) BitsetENTCacheImpl implementations 
 (jackrabbit-core and jackrabbit-jcr2spi).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3568) Property.getBinary().getStream() files in tempDir not removed by InputStream.close() nor by Binary.dispose()

2013-04-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3568:
---

Component/s: (was: jackrabbit-api)
 jackrabbit-webdav
   Priority: Major  (was: Blocker)

this is not a jackrabbit-core issue. 

i ran your test code against a local repository with the default configuration 
(created with new TransientRepository()).

i used test files with a total size of about 1gb.

there were no temp files created.

this might be a sling issue or a webdav-client or -server issue. 

to further narrow down the problem please run your test with
the following configurations:

- standalone repository accessed in-proc (not deployed in sling)
- standalone repository server accessed via webdav (not deployed in sling)
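
for illustration, a sketch of the first suggested configuration (in-proc 
TransientRepository; the path and the default admin credentials are 
placeholders for your setup):

{code}
import java.io.InputStream;
import javax.jcr.Binary;
import javax.jcr.Property;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

public class InProcBinaryCheck {
    public static void main(String[] args) throws Exception {
        TransientRepository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            Property data = session.getProperty("/myFile/jcr:content/jcr:data");
            Binary binary = data.getBinary();
            InputStream in = binary.getStream();
            try {
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) {
                    // drain the stream
                }
            } finally {
                in.close();
                binary.dispose(); // should release any spooled temp file
            }
        } finally {
            session.logout();
            repository.shutdown();
        }
    }
}
{code}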

 




 Property.getBinary().getStream() files in tempDir not removed by 
 InputStream.close() nor by Binary.dispose() 
 -

 Key: JCR-3568
 URL: https://issues.apache.org/jira/browse/JCR-3568
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-webdav
Affects Versions: 2.6
 Environment: Windows 7 Pro, Java 6.0.39, WebDAV, JCR 2.0
Reporter: Ulrich Schmidt

 I need to inspect the files stored in the jcr:data property of the node 
 jcr:content, which is a subnode of an nt:file node. Access mode is WebDAV 
 using the JCR 2.0 API.
 Jackrabbit does not drop the temp files created by the call 
 Property.getBinary().getStream() after the closing instructions 
 InputStream.close() and Binary.dispose(). I get a RepositoryException "No 
 space left on device" when the temp space becomes full.
 The executed code is:
 import java.io.File;
 import java.io.InputStream;
 import java.util.ArrayList;
 import java.util.List;
 import javax.jcr.Binary;
 import javax.jcr.Node;
 import javax.jcr.Property;
 import javax.jcr.Session;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class DownloadLoopMain {
     private final static Logger LOGGER =
             LoggerFactory.getLogger("Test.DownloadLoopMain");
     String repository = "http://localhost:8080/server";
     String user = "admin";
     String password = "admin";
     Session session;
     File temp = new File(System.getProperty("java.io.tmpdir"));
     List<String> nodeList = new ArrayList<String>();

     public DownloadLoopMain() throws Exception {
         LOGGER.info("TempDir=" + temp.getPath());
         long totalsize = 0;

         connectRepository();
         buildNodeList();
         List<String[]> tempfiles = getTempFiles(temp.listFiles());
         LOGGER.info("Start with number of files in Tempdir: " + tempfiles.size());
         for (String node : nodeList) {
             LOGGER.info("Retrieve node " + node);
             Node currentNode = session.getNode(node);
             Node fileNode = currentNode.getNode("jcr:content");
             Property jcrdata = fileNode.getProperty("jcr:data");
             Binary fileBin = jcrdata.getBinary();
             long filesize = fileBin.getSize();
             totalsize += filesize;
             InputStream file = fileBin.getStream();

             LOGGER.info("Now we have number of files in Tempdir: " + tempfiles.size());

             List<String[]> newTempfiles = getTempFiles(temp.listFiles());
             // Display new files in temp-directory
             compareTempfiles("new", newTempfiles, tempfiles);
             // Display files gone from temp-directory
             compareTempfiles("gone", tempfiles, newTempfiles);
             tempfiles = newTempfiles;

             file.close();
             fileBin.dispose();
         }
     }

     /**
      * Compare lists of tempfiles.
      * @param intend
      * @param list1
      * @param list2
      */
     public void compareTempfiles(String intend, List<String[]> list1,
             List<String[]> list2) {
         for (String[] list1file : list1) {
             boolean known = false;
             for (int i = 0; i < list2.size(); i++) {
                 String[] list2file = list2.get(i);
                 if (list1file[0].equals(list2file[0])) {
                     known = true;
                     break;
                 }
             }
             if (!known) {
                 LOGGER.info(intend + " tempfile=" + list1file[0] + " " + list1file[1]);
             }
         }
     }

     public List<String[]> getTempFiles(File[] files) {
         List<String[]> filesList = new ArrayList<String[]>();
         for (File file : 

[jira] [Resolved] (JCR-3568) Property.getBinary().getStream() files in tempDir not removed by InputStream.close() nor by Binary.dispose()

2013-04-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3568.


Resolution: Invalid

 Property.getBinary().getStream() files in tempDir not removed by 
 InputStream.close() nor by Binary.dispose() 
 -

 Key: JCR-3568
 URL: https://issues.apache.org/jira/browse/JCR-3568
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-webdav
Affects Versions: 2.6
 Environment: Windows 7 Pro, Java 6.0.39, WebDAV, JCR 2.0
Reporter: Ulrich Schmidt

 I need to inspect the files stored in the jcr:data property of the node 
 jcr:content, which is a subnode of an nt:file node. Access mode is WebDAV 
 using the JCR 2.0 API.
 Jackrabbit does not drop the temp files created by the call 
 Property.getBinary().getStream() after the closing instructions 
 InputStream.close() and Binary.dispose(). I get a RepositoryException "No 
 space left on device" when the temp space becomes full.
 The executed code is:
 import java.io.BufferedReader;
 import java.io.File;
 import java.io.FileReader;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.ArrayList;
 import java.util.List;
 import javax.jcr.Binary;
 import javax.jcr.Node;
 import javax.jcr.Property;
 import javax.jcr.Session;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class DownloadLoopMain {
     private final static Logger LOGGER =
             LoggerFactory.getLogger("Test.DownloadLoopMain");
     String repository = "http://localhost:8080/server";
     String user = "admin";
     String password = "admin";
     Session session;
     File temp = new File(System.getProperty("java.io.tmpdir"));
     List<String> nodeList = new ArrayList<String>();

     public DownloadLoopMain() throws Exception {
         LOGGER.info("TempDir=" + temp.getPath());
         long totalsize = 0;

         connectRepository();
         buildNodeList();
         List<String[]> tempfiles = getTempFiles(temp.listFiles());
         LOGGER.info("Start with number of files in Tempdir: " + tempfiles.size());
         for (String node : nodeList) {
             LOGGER.info("Retrieve node " + node);
             Node currentNode = session.getNode(node);
             Node fileNode = currentNode.getNode("jcr:content");
             Property jcrdata = fileNode.getProperty("jcr:data");
             Binary fileBin = jcrdata.getBinary();
             long filesize = fileBin.getSize();
             totalsize += filesize;
             InputStream file = fileBin.getStream();

             LOGGER.info("Now we have number of files in Tempdir: " + tempfiles.size());

             List<String[]> newTempfiles = getTempFiles(temp.listFiles());
             // Display new files in temp-directory
             compareTempfiles("new", newTempfiles, tempfiles);
             // Display files gone from temp-directory
             compareTempfiles("gone", tempfiles, newTempfiles);
             tempfiles = newTempfiles;

             file.close();
             fileBin.dispose();
         }
     }

     /**
      * Compare lists of tempfiles.
      * @param intend
      * @param list1
      * @param list2
      */
     public void compareTempfiles(String intend, List<String[]> list1,
             List<String[]> list2) {
         for (String[] list1file : list1) {
             boolean known = false;
             for (int i = 0; i < list2.size(); i++) {
                 String[] list2file = list2.get(i);
                 if (list1file[0].equals(list2file[0])) {
                     known = true;
                     break;
                 }
             }
             if (!known) {
                 LOGGER.info(intend + " tempfile=" + list1file[0] + " " + list1file[1]);
             }
         }
     }

     public List<String[]> getTempFiles(File[] files) {
         List<String[]> filesList = new ArrayList<String[]>();
         for (File file : files) {
             String[] filedesc = new String[2];
             filedesc[0] = file.getName();
             filedesc[1] = file.length() + "";
             filesList.add(filedesc);
         }
         return filesList;
     }

     public void buildNodeList() throws IOException {
         String path = "E:/Jackrabbit/logs/Populate-Files.log";
         File file = new File(path);
         BufferedReader br = new BufferedReader(new FileReader(file));
         String line;
         while ((line = br.readLine()) != null) {
             nodeList.add(line);
 

[jira] [Assigned] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3514:
--

Assignee: Stefan Guggisberg

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of the RepositoryImpl class?
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576704#comment-13576704
 ] 

Stefan Guggisberg commented on JCR-3514:


bq. The condition should rather be if (!initialized || !active) { 
instead of if (!initialized || active) { 

the condition is correct as is.

a workspace is considered active if there are sessions connected to it or if 
there's a current GC task accessing it.

a workspace is considered idle if it's not active. 

disposeIfIdle should never dispose an active workspace, hence the if-statement.

see also JCR-2749
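
for illustration, a hedged sketch of that guard (field names are illustrative, 
not the actual RepositoryImpl code): disposeIfIdle has to bail out while the 
workspace is active, hence "|| active" rather than "|| !active".

{code}
public class WorkspaceInfoSketch {
    private volatile boolean initialized;
    private volatile boolean active;       // open sessions or a running GC task
    private volatile long lastAccessTime;

    synchronized void disposeIfIdle(long maxIdleMillis) {
        if (!initialized || active) {
            return; // nothing to dispose: not initialized, or still in use
        }
        if (System.currentTimeMillis() - lastAccessTime > maxIdleMillis) {
            dispose();
        }
    }

    private void dispose() {
        initialized = false;
        // release workspace resources here
    }
}
{code}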

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of the RepositoryImpl class?
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (JCR-3514) Error in RepositoryImpl class

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3514.


Resolution: Not A Problem

 Error in RepositoryImpl class
 -

 Key: JCR-3514
 URL: https://issues.apache.org/jira/browse/JCR-3514
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4.2
Reporter: Sarfaraaz ASLAM
Assignee: Stefan Guggisberg

 Can you please verify line 2123 of the RepositoryImpl class?
 The condition should rather be  if (!initialized || !active) {
 instead of  if (!initialized || active) {

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3509) Workspace maxIdleTime parameter not working

2013-02-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3509:
---

Priority: Minor  (was: Major)

 Workspace maxIdleTime parameter not working
 ---

 Key: JCR-3509
 URL: https://issues.apache.org/jira/browse/JCR-3509
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: config
Affects Versions: 2.4.3
 Environment: JSF, SPRING
Reporter: Sarfaraaz ASLAM
Priority: Minor
 Attachments: derby.jackrabbit.repository.xml, JcrConfigurer.java


 I would like to set the maximum number of seconds that a workspace can remain 
 unused before the workspace is automatically closed through the maxIdleTime 
 parameter, but this does not seem to work. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3509) Workspace maxIdleTime parameter not working

2013-02-12 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576708#comment-13576708
 ] 

Stefan Guggisberg commented on JCR-3509:


bq. It should rather be if (!initialized || !active) {

no, the if-statement is correct, see JCR-3514.

please note that a workspace won't be automatically disposed if there's at 
least one session connected to it. 
make sure you call Session#logout if you're done.
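
for illustration, a minimal sketch (repository lookup, credentials and workspace 
name are placeholders) of releasing the session so the workspace can actually 
become idle:

{code}
import javax.jcr.Repository;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

public class WorkspaceIdleSketch {
    static void useWorkspace(Repository repository) throws RepositoryException {
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()), "myWorkspace");
        try {
            // ... read or write content here ...
        } finally {
            // without logout() the workspace keeps an open session, is never
            // considered idle, and maxIdleTime appears to have no effect
            session.logout();
        }
    }
}
{code}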

 Workspace maxIdleTime parameter not working
 ---

 Key: JCR-3509
 URL: https://issues.apache.org/jira/browse/JCR-3509
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: config
Affects Versions: 2.4.3
 Environment: JSF, SPRING
Reporter: Sarfaraaz ASLAM
 Attachments: derby.jackrabbit.repository.xml, JcrConfigurer.java


 I would like to set the maximum number of seconds that a workspace can remain 
 unused before the workspace is automatically closed through the maxIdleTime 
 parameter, but this does not seem to work. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3502) Deleted states are not merged correctly

2013-01-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3502:
---

Component/s: clustering

 Deleted states are not merged correctly
 ---

 Key: JCR-3502
 URL: https://issues.apache.org/jira/browse/JCR-3502
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: clustering
Reporter: Unico Hommes
Assignee: Unico Hommes

 When a node is simultaneously deleted on two cluster nodes, the save on the 
 cluster node that lost the race fails unnecessarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507249#comment-13507249
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. Yes, that is correct. BUT, adding additional child node types is making the 
restriction LESS restrictive and should be allowed. 

wrong. the required primary node types of a child node definition are logically 
ANDed during validation, i.e. adding req. types makes the constraint stronger, 
while removing them OTOH makes it weaker. see [0].

bq. And setting the child node to nt:base makes the restriction LEAST 
restrictive.

agreed. that's an edge case that's currently not handled. since the abstract 
nt:base node type is the root of the node type hierarchy
it is implicitly included in every req. types constraint. explicitly adding or 
removing nt:base has no effect on the constraint.  

[0] 
http://www.day.com/specs/jcr/2.0/3_Repository_Model.html#3.7.4.1%20Required%20Primary%20Node%20Types
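
for illustration, a hedged sketch of the "logically ANDed" rule (not the actual 
validation code): a child node has to satisfy every required primary node type, 
so adding a type to the list can only make the constraint harder to meet.

{code}
import javax.jcr.Node;
import javax.jcr.RepositoryException;

public class RequiredTypesSketch {
    static boolean satisfiesAll(Node child, String[] requiredPrimaryTypes)
            throws RepositoryException {
        for (String type : requiredPrimaryTypes) {
            if (!child.isNodeType(type)) {
                return false; // one unmet required type fails the whole check
            }
        }
        return true;
    }
}
{code}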


 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3452:
---

Issue Type: Improvement  (was: Bug)

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507346#comment-13507346
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. What about the change of property definitions from single to multiple (I 
mentioned this problem in the description)? Do I miss there something, too?

you're right. changing isMultiple from false to true should be allowed but 
currently isn't. same with changing the requiredType to UNDEFINED. 

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3452.


   Resolution: Fixed
Fix Version/s: 2.6
 Assignee: Stefan Guggisberg

fixed in svn r1415685.

the following are now considered trivial modifications:
- adding/removing nt:base as requiredPrimaryType constraint 
- making a single-valued property multi-valued 
- changing a property's requiredType constraint to UNDEFINED

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 2.6

 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (JCR-3452) Modified property and child node definition are rejected

2012-11-30 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3452:
---

Issue Type: Bug  (was: Improvement)

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 2.6

 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3452) Modified property and child node definition are rejected

2012-11-29 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13506599#comment-13506599
 ] 

Stefan Guggisberg commented on JCR-3452:


bq. AFAIU, making something more restrictive than before is considered a 
major change because it could make existing content invalid.

correct. changes that might render existing content illegal according to the 
new definition are considered major. 
only changes that have no effect on existing content (e.g. making a mandatory 
item non-mandatory) are allowed.

 Modified property and child node definition are rejected
 

 Key: JCR-3452
 URL: https://issues.apache.org/jira/browse/JCR-3452
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.5.2
Reporter: Tom Quellenberg
Priority: Minor
 Attachments: patch.txt


 NodeTypeDefDiff identifies modified properties and child nodes by 
 QNodeDefinitionId and QPropertyDefinitionId. Both classes have their own 
 equals and hashCode methods. Thus, properties and child nodes with trivial 
 changes (changed required types or isMultiple) are always considered as added 
 and removed ( = major change) and never as changed.
 Additionally, the check for required child node types seems wrong to me: adding 
 additional (alternative) constraints is considered a major change. I think 
 the opposite is true: removing node types from the list of required types is 
 a major change (there may exist child nodes of the removed type), while adding 
 alternative constraints is a trivial change.
 There is one more change to the required child node types, which can easily 
 be checked: setting the required type to nt:base. This should always be 
 possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (JCR-3468) ConcurrentModificationException in BitSetENTCacheImpl

2012-11-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned JCR-3468:
--

Assignee: Stefan Guggisberg

 ConcurrentModificationException in BitSetENTCacheImpl
 -

 Key: JCR-3468
 URL: https://issues.apache.org/jira/browse/JCR-3468
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.10
Reporter: Jeroen van Erp
Assignee: Stefan Guggisberg
Priority: Critical

 Irregularly, the following ConcurrentModificationException occurs in the logs 
 of our application; it seems like either a sync is missing, or a copy into a 
 new collection before iterating.
 {noformat}
 java.util.ConcurrentModificationException: null
 at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100) 
 ~[na:1.6.0_32]
 at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154) ~[na:1.6.0_32]
 at 
 org.apache.jackrabbit.core.nodetype.BitSetENTCacheImpl.findBest(BitSetENTCacheImpl.java:114)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:1082)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:508)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getEffectiveNodeType(NodeImpl.java:776) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getApplicablePropertyDefinition(NodeImpl.java:826)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemManager.getDefinition(ItemManager.java:255) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemData.getDefinition(ItemData.java:101) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyData.getPropertyDefinition(PropertyData.java:55)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyImpl.internalGetValues(PropertyImpl.java:461)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.PropertyImpl.getValues(PropertyImpl.java:498) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 {noformat}
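 For illustration, a hedged sketch of the remedies hinted at above (the cache 
 type and lookup are placeholders, not the actual BitSetENTCacheImpl code): 
 iterate over a snapshot taken under the same lock the writers use.
 {code}
 import java.util.ArrayList;
 import java.util.List;
 import java.util.TreeSet;

 public class SnapshotIterationSketch {
     private final TreeSet<String> sortedKeys = new TreeSet<String>();

     public void add(String key) {
         synchronized (sortedKeys) { // writers take the same lock
             sortedKeys.add(key);
         }
     }

     public String findFirstMatch(String prefix) {
         // copy under the lock, then iterate the snapshot: concurrent writers
         // can no longer cause a ConcurrentModificationException in this loop
         List<String> snapshot;
         synchronized (sortedKeys) {
             snapshot = new ArrayList<String>(sortedKeys);
         }
         for (String key : snapshot) {
             if (key.startsWith(prefix)) {
                 return key;
             }
         }
         return null;
     }
 }
 {code}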

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (JCR-3468) ConcurrentModificationException in BitSetENTCacheImpl

2012-11-28 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3468.


   Resolution: Fixed
Fix Version/s: 2.6

fixed in svn r1414733

 ConcurrentModificationException in BitSetENTCacheImpl
 -

 Key: JCR-3468
 URL: https://issues.apache.org/jira/browse/JCR-3468
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2.10
Reporter: Jeroen van Erp
Assignee: Stefan Guggisberg
Priority: Critical
 Fix For: 2.6


 Irregularly, the following ConcurrentModificationException occurs in the logs 
 of our application; it seems like either a sync is missing, or a copy into a 
 new collection before iterating.
 {noformat}
 java.util.ConcurrentModificationException: null
 at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100) 
 ~[na:1.6.0_32]
 at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154) ~[na:1.6.0_32]
 at 
 org.apache.jackrabbit.core.nodetype.BitSetENTCacheImpl.findBest(BitSetENTCacheImpl.java:114)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:1082)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.getEffectiveNodeType(NodeTypeRegistry.java:508)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getEffectiveNodeType(NodeImpl.java:776) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.NodeImpl.getApplicablePropertyDefinition(NodeImpl.java:826)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemManager.getDefinition(ItemManager.java:255) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.ItemData.getDefinition(ItemData.java:101) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyData.getPropertyDefinition(PropertyData.java:55)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at 
 org.apache.jackrabbit.core.PropertyImpl.internalGetValues(PropertyImpl.java:461)
  ~[jackrabbit-core-2.2.10.jar:2.2.10]
 at org.apache.jackrabbit.core.PropertyImpl.getValues(PropertyImpl.java:498) 
 ~[jackrabbit-core-2.2.10.jar:2.2.10]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (JCR-3453) Jackrabbit might deplete the temporary tablespace on Oracle

2012-10-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13486897#comment-13486897
 ] 

Stefan Guggisberg commented on JCR-3453:


FWIW:

bq. * Actually why do you need to use NVL(...) in the column list? Other DB 
filesystem implementations do not have this workaround. 

because oracle is AFAIK the only rdbms which doesn't distinguish empty strings 
or empty lob's from NULL...

for more detailed information have a look at the javadoc ([0]).

[0] 
http://jackrabbit.apache.org/api/2.1/org/apache/jackrabbit/core/fs/db/OracleFileSystem.html#buildSQLStatements()

 Jackrabbit might deplete the temporary tablespace on Oracle
 ---

 Key: JCR-3453
 URL: https://issues.apache.org/jira/browse/JCR-3453
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.1.2, 2.5.2
 Environment: Operating system: Linux
 Application server: Websphere v7
 RDBMS: Oracle 11g
 Jackrabbit: V2.1.2 (built into Liferay 6.0 EE SP2)
Reporter: Laszlo Csontos
 Attachments: repository.xml


 *** Experienced phenomenon ***
 Our customer reported an issue regarding Liferay’s document library: while 
 documents are being retrieved, the following exception occurs accompanied by 
 temporary tablespace shortage.
 [9/24/12 8:00:55:973 CEST] 0023 SystemErr R ERROR 
 [org.apache.jackrabbit.core.util.db.ConnectionHelper:454] Failed to execute 
 SQL (stacktrace on DEBUG log level)
 java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in 
 tablespace TEMP
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
 …
 at 
 oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecute(WSJdbcPreparedStatement.java:928)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.execute(WSJdbcPreparedStatement.java:614)
 …
 at 
 org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:328)
 at 
 org.apache.jackrabbit.core.fs.db.DatabaseFileSystem.getInputStream(DatabaseFileSystem.java:663)
 at 
 org.apache.jackrabbit.core.fs.BasedFileSystem.getInputStream(BasedFileSystem.java:121)
 at 
 org.apache.jackrabbit.core.fs.FileSystemResource.getInputStream(FileSystemResource.java:149)
 at 
 org.apache.jackrabbit.core.RepositoryImpl.loadRootNodeId(RepositoryImpl.java:556)
 at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:325)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:673)
 at 
 org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:231)
 at 
 org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:279)
 at 
 org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:375)
 at 
 com.liferay.portal.jcr.jackrabbit.JCRFactoryImpl.createSession(JCRFactoryImpl.java:67)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:43)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:47)
 at com.liferay.documentlibrary.util.JCRHook.getFileAsStream(JCRHook.java:472)
 at 
 com.liferay.documentlibrary.util.HookProxyImpl.getFileAsStream(HookProxyImpl.java:149)
 at 
 com.liferay.documentlibrary.util.SafeFileNameHookWrapper.getFileAsStream(SafeFileNameHookWrapper.java:236)
 at 
 com.liferay.documentlibrary.service.impl.DLLocalServiceImpl.getFileAsStream(DLLocalServiceImpl.java:192)
 The original size of tablespace TEMP was 8 GB when the error 
 occurred for the first time. Later on it was extended by an 
 additional 7 GB to 15 GB, yet the available space was still not sufficient to 
 fulfill subsequent requests and ORA-01652 emerged again.
 *** Reproduction steps ***
 1) Create a dummy 10MB file
 $ dd if=/dev/urandom of=/path/to/dummy_blob bs=8192 count=1280
 1280+0 records in
 1280+0 records out
 10485760 bytes (10 MB) copied, 0.722818 s, 14.5 MB/s
 2) Create a temp tablespace
 The tablespace is created with 5Mb and automatic expansion is intentionally 
 disabled.
 SQL> CREATE TEMPORARY TABLESPACE jcr_temp
   TEMPFILE '/path/to/jcr_temp_01.dbf'
   SIZE 5M AUTOEXTEND OFF;
 Table created.
 SQL> ALTER USER jcr TEMPORARY TABLESPACE jcr_temp;
 User altered.
 3) Prepare the test case
 For the sake of simplicity a dummy table is created (similar to Jackrabbit's 
 FSENTRY).
 SQL> create table FSENTRY(data blob);
 Table created.
 SQL>
 CREATE OR REPLACE PROCEDURE load_blob
 AS
 dest_loc  BLOB;
 src_loc   BFILE := BFILENAME('DATA_PUMP_DIR', 'dummy_blob');
 BEGIN
 INSERT INTO FSENTRY (data)
 VALUES (empty_blob())
 

[jira] [Commented] (JCR-3453) Jackrabbit might deplete the temporary tablespace on Oracle

2012-10-30 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13486993#comment-13486993
 ] 

Stefan Guggisberg commented on JCR-3453:


bq. Actually I'm ready to contribute this enhancement to Jackrabbit.

excellent!

bq. If you could modify my attached repository.xml file so that it uses 
Oracle9FileSystem & Oracle9PersistenceManager and certify that that 
configuration is going to work on Oracle 11gR2, I'd like to change this ticket 
to improvement.

sorry, i have neither the time nor an oracle install at hand.

 Jackrabbit might deplete the temporary tablespace on Oracle
 ---

 Key: JCR-3453
 URL: https://issues.apache.org/jira/browse/JCR-3453
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.1.2, 2.5.2
 Environment: Operating system: Linux
 Application server: Websphere v7
 RDBMS: Oracle 11g
 Jackrabbit: V2.1.2 (built into Liferay 6.0 EE SP2)
Reporter: Laszlo Csontos
 Attachments: repository.xml


 *** Experienced phenomenon ***
 Our customer reported an issue regarding Liferay’s document library: while 
 documents are being retrieved, the following exception occurs accompanied by 
 temporary tablespace shortage.
 [9/24/12 8:00:55:973 CEST] 0023 SystemErr R ERROR 
 [org.apache.jackrabbit.core.util.db.ConnectionHelper:454] Failed to execute 
 SQL (stacktrace on DEBUG log level)
 java.sql.SQLException: ORA-01652: unable to extend temp segment by 128 in 
 tablespace TEMP
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
 at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
 …
 at 
 oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.pmiExecute(WSJdbcPreparedStatement.java:928)
 at 
 com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.execute(WSJdbcPreparedStatement.java:614)
 …
 at 
 org.apache.jackrabbit.core.util.db.ConnectionHelper.exec(ConnectionHelper.java:328)
 at 
 org.apache.jackrabbit.core.fs.db.DatabaseFileSystem.getInputStream(DatabaseFileSystem.java:663)
 at 
 org.apache.jackrabbit.core.fs.BasedFileSystem.getInputStream(BasedFileSystem.java:121)
 at 
 org.apache.jackrabbit.core.fs.FileSystemResource.getInputStream(FileSystemResource.java:149)
 at 
 org.apache.jackrabbit.core.RepositoryImpl.loadRootNodeId(RepositoryImpl.java:556)
 at org.apache.jackrabbit.core.RepositoryImpl.init(RepositoryImpl.java:325)
 at org.apache.jackrabbit.core.RepositoryImpl.create(RepositoryImpl.java:673)
 at 
 org.apache.jackrabbit.core.TransientRepository$2.getRepository(TransientRepository.java:231)
 at 
 org.apache.jackrabbit.core.TransientRepository.startRepository(TransientRepository.java:279)
 at 
 org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:375)
 at 
 com.liferay.portal.jcr.jackrabbit.JCRFactoryImpl.createSession(JCRFactoryImpl.java:67)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:43)
 at com.liferay.portal.jcr.JCRFactoryUtil.createSession(JCRFactoryUtil.java:47)
 at com.liferay.documentlibrary.util.JCRHook.getFileAsStream(JCRHook.java:472)
 at 
 com.liferay.documentlibrary.util.HookProxyImpl.getFileAsStream(HookProxyImpl.java:149)
 at 
 com.liferay.documentlibrary.util.SafeFileNameHookWrapper.getFileAsStream(SafeFileNameHookWrapper.java:236)
 at 
 com.liferay.documentlibrary.service.impl.DLLocalServiceImpl.getFileAsStream(DLLocalServiceImpl.java:192)
 The original size of tablespace TEMP was 8 GB when the error 
 occurred for the first time. Later on it was extended by an 
 additional 7 GB to 15 GB, yet the available space was still not sufficient to 
 fulfill subsequent requests and ORA-01652 emerged again.
 *** Reproduction steps ***
 1) Create a dummy 10MB file
 $ dd if=/dev/urandom of=/path/to/dummy_blob bs=8192 count=1280
 1280+0 records in
 1280+0 records out
 10485760 bytes (10 MB) copied, 0.722818 s, 14.5 MB/s
 2) Create a temp tablespace
 The tablespace is created with 5Mb and automatic expansion is intentionally 
 disabled.
 SQL> CREATE TEMPORARY TABLESPACE jcr_temp
   TEMPFILE '/path/to/jcr_temp_01.dbf'
   SIZE 5M AUTOEXTEND OFF;
 Table created.
 SQL> ALTER USER jcr TEMPORARY TABLESPACE jcr_temp;
 User altered.
 3) Prepare the test case
 For the sake of simplicity a dummy table is created (similar to Jackrabbit's 
 FSENTRY).
 SQL> create table FSENTRY(data blob);
 Table created.
 SQL>
 CREATE OR REPLACE PROCEDURE load_blob
 AS
 dest_loc  BLOB;
 src_loc   BFILE := BFILENAME('DATA_PUMP_DIR', 'dummy_blob');
 BEGIN
 INSERT INTO FSENTRY (data)
 VALUES (empty_blob())
 RETURNING data INTO dest_loc;
 DBMS_LOB.OPEN(src_loc, 

[jira] [Commented] (JCR-3424) hit ORA-12899 when add/save node

2012-09-11 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452802#comment-13452802
 ] 

Stefan Guggisberg commented on JCR-3424:


FWIW, here are the relevant log entries:

org.apache.jackrabbit.core.state.ItemStateException: failed to write bundle: 
21a24c48-670f-4676-8e06-94b628f833b4
[...]
Caused by: java.sql.SQLException: ORA-12899: value too large for column 
LYIN1.VTJP_DEF_BUNDLE.NODE_ID (actual: 81, maximum: 16) 

just wild guesses: an arithmetic overflow perhaps? a sql driver bug?



 hit ORA-12899 when add/save node
 

 Key: JCR-3424
 URL: https://issues.apache.org/jira/browse/JCR-3424
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.2
 Environment: JBoss4.2.3 + Oracle 10.2.0.4 + Winxp
Reporter: licheng

 When running a longevity test, we hit ORA-12899 once when saving a node. 
 It's hard to reproduce in our test, but someone else has also hit this issue 
 before.
 2012-09-06 13:11:30,485 ERROR 
 [org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager] 
 Failed to persist ChangeLog (stacktrace on DEBUG log level), 
 blockOnConnectionLoss = false
 org.apache.jackrabbit.core.state.ItemStateException: failed to write bundle: 
 21a24c48-670f-4676-8e06-94b628f833b4
   at 
 org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.storeBundle(BundleDbPersistenceManager.java:1086)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.putBundle(AbstractBundlePersistenceManager.java:684)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.storeInternal(AbstractBundlePersistenceManager.java:626)
   at 
 org.apache.jackrabbit.core.persistence.bundle.AbstractBundlePersistenceManager.store(AbstractBundlePersistenceManager.java:503)
   at 
 org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager.store(BundleDbPersistenceManager.java:479)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:757)
   at 
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1487)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
   at 
 org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
   at 
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:326)
   at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:289)
   at 
 org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
   at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.access.JcrAccessUtil.createRepositoryNodeWithJcrName(JcrAccessUtil.java:125)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.access.JcrAccessUtil.createRepositoryNode(JcrAccessUtil.java:85)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrPersistentObjectFactory.createChildNonVersionableLeaveNode(JcrPersistentObjectFactory.java:169)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrPersistentObjectFactory.createChildLeaveNode(JcrPersistentObjectFactory.java:65)
   at 
 com.vitria.modeling.repository.sapi.service.jcr.JcrInternalNodeImpl.createLeaveChild(JcrInternalNodeImpl.java:66)
   at 
 com.vitria.modeling.repository.sapi.service.core.CoreModelContainer.createModel(CoreModelContainer.java:80)
   at 
 com.vitria.modeling.repository.sapi.service.proxy.local.LocalModelContainer.createModel(LocalModelContainer.java:167)
 .
 Caused by: java.sql.SQLException: ORA-12899: value too large for column 
 LYIN1.VTJP_DEF_BUNDLE.NODE_ID (actual: 81, maximum: 16)
   at 
 oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
   at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
   at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
   at 
 oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:219)
   at 
 oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:970)
   at 
 oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1190)
   at 
 oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370)
   at 
 

[jira] [Commented] (OAK-267) Repository fails to start with - cannot branch off a private branch

2012-08-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13440422#comment-13440422
 ] 

Stefan Guggisberg commented on OAK-267:
---

while i still can't explain how a private branch commit could possibly become a 
HEAD, i've added code in svn r1376578 that throws an exception should the same 
error occur again.
the callstack of that exception should hopefully help in investigating the root 
cause.


 Repository fails to start with - cannot branch off a private branch
 -

 Key: OAK-267
 URL: https://issues.apache.org/jira/browse/OAK-267
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.5
Reporter: Chetan Mehrotra
Priority: Minor
 Attachments: stacktrace.txt


 On starting a Sling instance I am get following exception (complete 
 stacktrace would be attached)
 {noformat}
 org.apache.jackrabbit.mk.api.MicroKernelException: java.lang.Exception: 
 cannot branch off a private branch
   at 
 org.apache.jackrabbit.mk.core.MicroKernelImpl.branch(MicroKernelImpl.java:508)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.init(KernelNodeStoreBranch.java:56)
   at 
 org.apache.jackrabbit.oak.kernel.KernelNodeStore.branch(KernelNodeStore.java:101)
   at org.apache.jackrabbit.oak.core.RootImpl.refresh(RootImpl.java:160)
   at org.apache.jackrabbit.oak.core.RootImpl.init(RootImpl.java:111)
   at 
 org.apache.jackrabbit.oak.core.ContentSessionImpl.getCurrentRoot(ContentSessionImpl.java:78)
   at 
 org.apache.jackrabbit.oak.jcr.SessionDelegate.init(SessionDelegate.java:94)
   at 
 org.apache.jackrabbit.oak.jcr.RepositoryImpl.login(RepositoryImpl.java:137)
 {noformat}
 This error does not go away after a restart either. On debugging, 
 StoredCommit#branchRootId was not null.
 As per the logs, the last shutdown was clean. I was debugging a query issue 
 which resulted in an exception, so that might have something to do with this. 
 Logging a bug for now to keep track. If the complete repo (~70 MB of .mk 
 folder) is required, let me know and I will share it somewhere.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-272) every session login causes a mk.branch operation

2012-08-22 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-272:
-

 Summary: every session login causes a mk.branch operation
 Key: OAK-272
 URL: https://issues.apache.org/jira/browse/OAK-272
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: core
Reporter: Stefan Guggisberg


here's the relevant stack trace (copied from OAK-267):
{code}
at 
org.apache.jackrabbit.mk.core.MicroKernelImpl.branch(MicroKernelImpl.java:508)
at 
org.apache.jackrabbit.oak.kernel.KernelNodeStoreBranch.init(KernelNodeStoreBranch.java:56)
at 
org.apache.jackrabbit.oak.kernel.KernelNodeStore.branch(KernelNodeStore.java:101)
at org.apache.jackrabbit.oak.core.RootImpl.refresh(RootImpl.java:160)
at org.apache.jackrabbit.oak.core.RootImpl.init(RootImpl.java:111)
at 
org.apache.jackrabbit.oak.core.ContentSessionImpl.getCurrentRoot(ContentSessionImpl.java:78)
at org.apache.jackrabbit.oak.jcr.SessionDelegate.init(SessionDelegate.java:94)
at org.apache.jackrabbit.oak.jcr.RepositoryImpl.login(RepositoryImpl.java:137)
[...]
{code}

while investigating OAK-267 i've noticed 40k empty branch commits all based on 
the same head revision.
those branch commits seem to be unnecessary since they're all empty (i.e. not 
followed by write operations).
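
A minimal sketch of the kind of lazy branching that would avoid such empty branch commits: defer {{mk.branch()}} until the session actually writes something. The class and method names are illustrative only, not the actual Oak fix; only {{MicroKernel.branch()}} and {{commit()}} are real API calls.

{code}
import org.apache.jackrabbit.mk.api.MicroKernel;

public class LazyBranch {
    private final MicroKernel mk;
    private final String baseRevision;
    private String branchRevision; // null until the first write

    public LazyBranch(MicroKernel mk, String baseRevision) {
        this.mk = mk;
        this.baseRevision = baseRevision;
    }

    /** revision to read from: the private branch if one exists, else the public base */
    public String headRevision() {
        return branchRevision != null ? branchRevision : baseRevision;
    }

    /** branch lazily, so read-only sessions never pay for mk.branch() */
    public String write(String path, String jsonDiff) {
        if (branchRevision == null) {
            branchRevision = mk.branch(baseRevision);
        }
        branchRevision = mk.commit(path, jsonDiff, branchRevision, "");
        return branchRevision;
    }
}
{code}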

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-11) Document and tighten contract of Microkernel API

2012-08-21 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438508#comment-13438508
 ] 

Stefan Guggisberg commented on OAK-11:
--

+1 for julian's proposal

 Document and tighten contract of Microkernel API
 

 Key: OAK-11
 URL: https://issues.apache.org/jira/browse/OAK-11
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: documentation

 We should do a review of the Microkernel API with the goal to clarify, 
 disambiguate and document its contract.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-11) Document and tighten contract of Microkernel API

2012-08-21 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13438550#comment-13438550
 ] 

Stefan Guggisberg commented on OAK-11:
--

bq. getLength method is kind of vague too. It doesn't specify what should 
happen when blob does not exist. Does it return 0, -1, or throw 
MicroKernelException?
 
good point, clarified 'getLength' and 'read' methods in svn r1375435

 Document and tighten contract of Microkernel API
 

 Key: OAK-11
 URL: https://issues.apache.org/jira/browse/OAK-11
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Michael Dürig
Assignee: Julian Reschke
  Labels: documentation

 We should do a review of the Microkernel API with the goal to clarify, 
 disambiguate and document its contract.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-264) MicroKernel.diff for depth limited, unspecified changes

2012-08-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-264.
---

   Resolution: Fixed
Fix Version/s: 0.5

good point! 

fixed in svn r1375476

 MicroKernel.diff for depth limited, unspecified changes
 ---

 Key: OAK-264
 URL: https://issues.apache.org/jira/browse/OAK-264
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Thomas Mueller
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.5


 Currently the MicroKernel API specifies for the method diff, if the depth 
 parameter is used, that unspecified changes below a certain path can be 
 returned as:
   ^ /some/path
 I would prefer the slightly more verbose:
   ^ /some/path: {}
 Reason: It is similar to how getNode() returns node names if the depth is 
 limited: "some":{"path":{}}, and it makes parsing unambiguous: there is 
 always a ':' after the path, whether a property was changed or a node was 
 changed. Without the colon, the parser needs to look ahead to decide whether 
 a node was changed or a property was changed (the token after the path could 
 be the start of the next operation). And we could never ever support ':' as 
 an operation because that would make parsing ambiguous.
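
To make the look-ahead argument concrete, here is a toy parser sketch; it is illustrative only (not the MicroKernel's actual diff parser) and assumes entries in the proposed "<op> <path>: <json>" shape:

{code}
public class DiffEntryDemo {

    // with the trailing ": {}" every entry is "<op> <path>: <value>",
    // so one split on the first ':' is enough -- no look-ahead needed
    static void parseEntry(String entry) {
        char op = entry.charAt(0);                 // '+', '^', '-', ...
        int colon = entry.indexOf(':');
        String path = entry.substring(1, colon).trim();
        String value = entry.substring(colon + 1).trim();
        boolean nodeChanged = value.startsWith("{");
        System.out.println(op + " " + path + (nodeChanged ? " (node changed)" : " = " + value));
    }

    public static void main(String[] args) {
        parseEntry("^ /some/path: {}");            // unspecified changes below a node
        parseEntry("^ /test/foo/bar/p1: 456");     // a property change
        // the terser form "^ /some/path" has no ':' at all, so deciding whether
        // the next token still belongs to this entry requires look-ahead,
        // which is exactly the ambiguity described above
    }
}
{code}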

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (OAK-265) waitForCommit gets triggered on private branch commits

2012-08-21 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-265:
-

Assignee: Stefan Guggisberg

 waitForCommit gets triggered on private branch commits
 --

 Key: OAK-265
 URL: https://issues.apache.org/jira/browse/OAK-265
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.5


 waitForCommit should only be triggered on new (public) head revisions 
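
An illustrative sketch of that behaviour (not the committed fix): waiters blocked in waitForCommit() are woken up only when the public head moves, while private branch commits are ignored.

{code}
class HeadMonitor {
    private String headRevision;

    synchronized String waitForCommit(String oldHead, long timeout) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeout;
        while ((headRevision == null || headRevision.equals(oldHead))
                && System.currentTimeMillis() < deadline) {
            wait(Math.max(1, deadline - System.currentTimeMillis()));
        }
        return headRevision;
    }

    synchronized void commitDone(String newRevision, boolean privateBranchCommit) {
        if (!privateBranchCommit) {
            headRevision = newRevision;
            notifyAll();   // only new public head revisions wake up waiters
        }
    }
}
{code}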

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-254) waitForCommit returns null in certain situations

2012-08-17 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-254.
---

Resolution: Fixed

fixed in svn r1374228

 waitForCommit returns null in certain situations
 

 Key: OAK-254
 URL: https://issues.apache.org/jira/browse/OAK-254
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.5


 waitForCommit() returns null if there were no commits since startup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-239) MicroKernel.getRevisionHistory: maxEntries behavior should be documented

2012-08-14 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-239.
---

   Resolution: Fixed
Fix Version/s: 0.5

fixed in svn r1372850.

 MicroKernel.getRevisionHistory: maxEntries behavior should be documented
 

 Key: OAK-239
 URL: https://issues.apache.org/jira/browse/OAK-239
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Thomas Mueller
Priority: Minor
 Fix For: 0.5


 The method MicroKernel.getRevisionHistory uses a parameter maxEntries to 
 limit the number of returned entries. If the implementation has to limit the 
 entries, it is not clear from the documentation which entries to return (the 
 oldest entries, the newest entries, or any x entries).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-227:
--

Description: 
a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth -1
String diff0 = mk.diff(rev0, rev2, "/", -1);
// returned +/test/foo/bar:{p1:456} 

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +/test/foo/bar:{p1:456} 

// diff with depth 1
String diff2 = mk.diff(rev0, rev2, "/", 1);
// returned ^/test/foo, indicating that there are changes below /test/foo 

// diff with depth 0
String diff3 = mk.diff(rev0, rev2, "/", 0);
// returned ^/test, indicating that there are changes below /test 
{code}

  was:
a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +/test/foo/bar:{p1:456} 

// diff with depth 1
String diff1 = mk.diff(rev0, rev2, "/", 1);
// returned ^/test, indicating that there are changes below /test 
{code}





 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth -1
 String diff0 = mk.diff(rev0, rev2, "/", -1);
 // returned +/test/foo/bar:{p1:456} 
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +/test/foo/bar:{p1:456} 
 // diff with depth 1
 String diff2 = mk.diff(rev0, rev2, "/", 1);
 // returned ^/test/foo, indicating that there are changes below /test/foo 
 // diff with depth 0
 String diff3 = mk.diff(rev0, rev2, "/", 0);
 // returned ^/test, indicating that there are changes below /test 
 {code}
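
A short usage sketch of the new parameter (illustrative only; it assumes a MicroKernel instance and the example revisions rev0/rev2 from the description above): request a cheap, shallow overview first and fetch the full diff only for the subtree that actually changed.

{code}
static String detailedDiff(MicroKernel mk, String rev0, String rev2) {
    // shallow overview: depth 1 only reports that something changed below /test/foo
    String overview = mk.diff(rev0, rev2, "/", 1);   // e.g. ^"/test/foo":{}

    // in real code the changed path would be parsed out of 'overview';
    // it is hard-coded here to keep the sketch short
    String changedPath = "/test/foo";

    // full diff, restricted to the subtree that actually changed
    return mk.diff(rev0, rev2, changedPath, -1);     // e.g. +"/test/foo/bar":{"p1":456}
}
{code}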

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-03 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-227.
---

Resolution: Fixed

fixed in svn r1369019

 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth -1
 String diff0 = mk.diff(rev0, rev2, "/", -1);
 // returned +/test/foo/bar:{p1:456} 
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +/test/foo/bar:{p1:456} 
 // diff with depth 1
 String diff2 = mk.diff(rev0, rev2, "/", 1);
 // returned ^/test/foo, indicating that there are changes below /test/foo 
 // diff with depth 0
 String diff3 = mk.diff(rev0, rev2, "/", 0);
 // returned ^/test, indicating that there are changes below /test 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-227) add depth parameter to MicroKernel#diff method

2012-08-02 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-227:
-

 Summary: add depth parameter to MicroKernel#diff method
 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg


a depth parameter allows to specify how deep changes should be included in the 
returned JSON diff. 

an example:

{code}
// initial content (/test/foo)
String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");

// add /test/foo/bar
String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");

// modify property /test/foo/bar/p1
String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");

// diff with depth 5
String diff1 = mk.diff(rev0, rev2, "/", 5);
// returned +/test/foo/bar:{p1:456} 

// diff with depth 1
String diff1 = mk.diff(rev0, rev2, "/", 1);
// returned ^/test, indicating that there are changes below /test 
{code}




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-227) MicroKernel API: add depth parameter to diff method

2012-08-02 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-227:
--

Summary: MicroKernel API: add depth parameter to diff method  (was: add 
depth parameter to MicroKernel#diff method)

 MicroKernel API: add depth parameter to diff method
 ---

 Key: OAK-227
 URL: https://issues.apache.org/jira/browse/OAK-227
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 a depth parameter allows to specify how deep changes should be included in 
 the returned JSON diff. 
 an example:
 {code}
 // initial content (/test/foo)
 String rev0 = mk.commit("/", "+\"test\":{\"foo\":{}}", null, "");
 // add /test/foo/bar
 String rev1 = mk.commit("/test/foo", "+\"bar\":{\"p1\":123}", null, "");
 // modify property /test/foo/bar/p1
 String rev2 = mk.commit("/test/foo/bar", "^\"p1\":456", null, "");
 // diff with depth 5
 String diff1 = mk.diff(rev0, rev2, "/", 5);
 // returned +/test/foo/bar:{p1:456} 
 // diff with depth 1
 String diff1 = mk.diff(rev0, rev2, "/", 1);
 // returned ^/test, indicating that there are changes below /test 
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-77) Consolidate Utilities

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-77?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-77:
-

Component/s: (was: mk)

removing the mk component; IMO there are no redundant utility classes left in mk 
worth refactoring

 Consolidate Utilities
 -

 Key: OAK-77
 URL: https://issues.apache.org/jira/browse/OAK-77
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core, jcr
Reporter: angela
Priority: Minor

 as discussed on the dev list i would like to consolidate the various
 utilities. getting rid of redundancies etc

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-210:
-

 Summary: granularity of persisted data
 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg


the current persistence granularity is _single nodes_ (a node consists of 
properties and child node information). 

instead of storing/retrieving single nodes it would IMO make sense to store 
subtree aggregates of specific nodes. the choice of granularity could be based 
on simple filter criteria (e.g. property value).

dynamic persistence granularity would help reduce the number of records and 
r/w operations on the underlying store, thus improving performance.
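
A hypothetical sketch of what such "simple filter criteria" could look like; the interface and names are assumptions for illustration, not part of the MicroKernel code base.

{code}
import java.util.Map;

// decides whether a node becomes the root of a persisted subtree aggregate
interface AggregateBoundary {
    boolean isAggregateRoot(String path, Map<String, String> properties);
}

// example criterion: aggregate the subtree of every node carrying a marker property
class PropertyValueBoundary implements AggregateBoundary {
    @Override
    public boolean isAggregateRoot(String path, Map<String, String> properties) {
        return "true".equals(properties.get("aggregate"));
    }
}
{code}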

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-210:
-

Assignee: Stefan Guggisberg

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reduce the number of records and 
 r/w operations on the underlying store, thus improving performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13423864#comment-13423864
 ] 

Stefan Guggisberg commented on OAK-210:
---

bq. Do you see this as something to be implemented (or not) by each MK 
independently (i.e. something like an MK implementation detail)?

no, that's an implementation detail that doesn't affect the semantics of the 
MicroKernel API.

the most notable impact will be that the current implementation won't be able 
to provide {{:hash}} values for _every_ node. but that's already explicitly 
allowed for, see {{Microkernel#getNodes}} java doc.

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reduce the number of records and 
 r/w operations on the underlying store, thus improving performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-210) granularity of persisted data

2012-07-27 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-210:
--

Issue Type: Improvement  (was: Bug)

 granularity of persisted data
 -

 Key: OAK-210
 URL: https://issues.apache.org/jira/browse/OAK-210
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the current persistence granularity is _single nodes_ (a node consists of 
 properties and child node information). 
 instead of storing/retrieving single nodes it would IMO make sense to store 
 subtree aggregates of specific nodes. the choice of granularity could be 
 based on simple filter criteria (e.g. property value).
 dynamic persistence granularity would help reduce the number of records and 
 r/w operations on the underlying store, thus improving performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (JCR-3246) RepositoryImpl attempts to close active session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reopened JCR-3246:



reopening based on nick's comment/analysis

(thanks, nick!)

 RepositoryImpl attempts to close active session twice on shutdown
 -

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Critical

 On shutdown, sessions are being closed twice, which leads to the exception 
 being logged as shown below. As far as I can tell, {{RepositoryImpl}} has 
 system sessions in the list of active sessions, which is why it tries to close 
 them twice - the first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and the second time when disposing the workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}
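
For illustration only (this is not the attached JCR-3246.patch), the usual way to make such a close path idempotent is to guard it with a flag, so a second attempt during shutdown becomes a no-op:

{code}
import java.util.concurrent.atomic.AtomicBoolean;

class GuardedSession {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    void logout() {
        // only the first caller performs the actual close;
        // a duplicate attempt during repository shutdown is silently ignored
        if (closed.compareAndSet(false, true)) {
            doLogout();
        }
    }

    private void doLogout() {
        // release resources, notify listeners, etc.
    }
}
{code}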

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (JCR-3246) UserManagerImpl attempts to close session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3246:
---

Component/s: (was: jackrabbit-core)
 security
   Priority: Minor  (was: Critical)
Summary: UserManagerImpl attempts to close session twice on shutdown  
(was: RepositoryImpl attempts to close active session twice on shutdown)

 UserManagerImpl attempts to close session twice on shutdown
 ---

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: security
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Minor

 On shutdown, sessions are being closed twice, which leads to the exception 
 being logged as shown below. As far as I can tell, {{RepositoryImpl}} has 
 system sessions in the list of active sessions, which is why it tries to close 
 them twice - the first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and the second time when disposing the workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (JCR-3246) UserManagerImpl attempts to close session twice on shutdown

2012-07-20 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3246:
---

Attachment: JCR-3246.patch

proposed patch

 UserManagerImpl attempts to close session twice on shutdown
 ---

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: security
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Minor
 Attachments: JCR-3246.patch


 On shutdown, sessions are being closed twice, which leads to the exception 
 being logged as shown below. As far as I can tell, {{RepositoryImpl}} has 
 system sessions in the list of active sessions, which is why it tries to close 
 them twice - the first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and the second time when disposing the workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (JCR-3368) CachingHierarchyManager: inconsistent state after transient changes on root node

2012-07-18 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13417069#comment-13417069
 ] 

Stefan Guggisberg commented on JCR-3368:


 It seems that the special handling of the root node has nothing to do with 
 this issue. 

i cannot confirm this. AFAIU this issue only occurs when removing a direct 
child of the root node.
CachingHierarchyManager (CHM) keeps a reference to the root node's item state, 
which seems to become stale under certain circumstances.

i tried your altered test case. it failed on

session.getNode("/foo/bar/qux");

but that was because your test case doesn't clean up the test data. there were 
/foo.../foo[n] nodes from previous test runs. when cleaning up the test data 
before/after each run, the test doesn't fail anymore.

 CachingHierarchyManager: inconsistent state after transient changes on root 
 node 
 -

 Key: JCR-3368
 URL: https://issues.apache.org/jira/browse/JCR-3368
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.2.12, 2.4.2, 2.5
Reporter: Unico Hommes
 Attachments: HasNodeAfterRemoveTest.java


 See attached test case.
 You will see the following exception:
 javax.jcr.RepositoryException: failed to retrieve state of intermediary node
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:156)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolveNodePath(HierarchyManagerImpl.java:372)
 at org.apache.jackrabbit.core.NodeImpl.getNodeId(NodeImpl.java:276)
 at 
 org.apache.jackrabbit.core.NodeImpl.resolveRelativeNodePath(NodeImpl.java:223)
 at org.apache.jackrabbit.core.NodeImpl.hasNode(NodeImpl.java:2250)
 at 
 org.apache.jackrabbit.core.HasNodeAfterRemoveTest.testRemove(HasNodeAfterRemoveTest.java:14)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
 at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009)
 Caused by: org.apache.jackrabbit.core.state.NoSuchItemStateException: 
 c7ccbcd3-0524-4d4d-a109-eae84627f94e
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getTransientItemState(SessionItemStateManager.java:304)
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:153)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.getItemState(HierarchyManagerImpl.java:152)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolvePath(HierarchyManagerImpl.java:115)
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:152)
 ... 29 more
 I tried several things to fix this but didn't find a better solution than to 
 just wrap the statement
 NodeId id = resolveRelativeNodePath(relPath);
 in a try catch RepositoryException and return false when that exception 
 occurs.
 In particular I tried 
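
The description above (the email is truncated here) mentions wrapping the path resolution in a try/catch; roughly, that workaround would look like the following sketch, where itemMgr.itemExists(id) stands in for the existing body of NodeImpl.hasNode():

{code}
public boolean hasNode(String relPath) throws RepositoryException {
    NodeId id;
    try {
        id = resolveRelativeNodePath(relPath);
    } catch (RepositoryException e) {
        // "failed to retrieve state of intermediary node" -- see the stack trace above;
        // treat the stale cache entry as "node does not exist"
        return false;
    }
    return id != null && itemMgr.itemExists(id);
}
{code}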

[jira] [Updated] (JCR-3173) InvalidItemStateException if accessing VersionHistory before checkin()

2012-07-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3173:
---

Component/s: versioning
 transactions

 InvalidItemStateException if accessing VersionHistory before checkin()
 --

 Key: JCR-3173
 URL: https://issues.apache.org/jira/browse/JCR-3173
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, transactions, versioning
Affects Versions: 2.2.10
Reporter: Matthias Reischenbacher
 Attachments: UserTransactionCheckinTest.java


 A checkin operation fails during a transaction if the VersionHistory of a 
 node has been accessed beforehand. See the attached test case for further details.
 ---
 Test set: org.apache.jackrabbit.core.version.UserTransactionCheckinTest
 ---
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.072 sec  
 FAILURE!
 testRestoreWithXA(org.apache.jackrabbit.core.version.UserTransactionCheckinTest)
   Time elapsed: 3.858 sec   ERROR!
 javax.jcr.InvalidItemStateException: Could not find child 
 e77834ee-244c-441f-ab94-19847c769fa4 of node 
 03629609-8049-46ee-9e80-279c70b3a34d
   at 
 org.apache.jackrabbit.core.ItemManager.getDefinition(ItemManager.java:207)
   at org.apache.jackrabbit.core.ItemData.getDefinition(ItemData.java:99)
   at org.apache.jackrabbit.core.ItemManager.canRead(ItemManager.java:421)
   at 
 org.apache.jackrabbit.core.ItemManager.createItemData(ItemManager.java:843)
   at 
 org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:391)
   at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:328)
   at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:622)
   at 
 org.apache.jackrabbit.core.SessionImpl.getNodeById(SessionImpl.java:493)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl$1.perform(VersionManagerImpl.java:123)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl$1.perform(VersionManagerImpl.java:1)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.perform(VersionManagerImpl.java:96)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.checkin(VersionManagerImpl.java:115)
   at 
 org.apache.jackrabbit.core.VersionManagerImpl.checkin(VersionManagerImpl.java:101)
   at org.apache.jackrabbit.core.NodeImpl.checkin(NodeImpl.java:2830)
   at 
 org.apache.jackrabbit.core.version.UserTransactionCheckinTest.testRestoreWithXA(UserTransactionCheckinTest.java:35)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (JCR-3379) XA concurrent transactions - NullPointerException

2012-07-10 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3379:
---

Component/s: versioning
 transactions

 XA concurrent transactions - NullPointerException
 -

 Key: JCR-3379
 URL: https://issues.apache.org/jira/browse/JCR-3379
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core, transactions, versioning
Affects Versions: 2.4.2, 2.5
 Environment: java version 1.6.0_26
 Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
 Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
 Linux dev 2.6.32-5-amd64 #1 SMP Thu Mar 22 17:26:33 UTC 2012 x86_64 GNU/Linux
Reporter: Stanislav Dvorscak
Assignee: Claus Köll
 Attachments: JCR-3379.patch


 If several threads are working with XA transactions, a NullPointerException 
 randomly occurs. After that, every other transaction deadlocks on the 
 Jackrabbit side, and a restart of the server is necessary.
 The exception is:
 Exception in thread executor-13 java.lang.NullPointerException
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.isSameGlobalTx(VersioningLock.java:116)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.allowReader(VersioningLock.java:126)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$XidRWLock.endWrite(VersioningLock.java:161)
   at 
 EDU.oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$WriterLock.release(Unknown
  Source)
   at 
 org.apache.jackrabbit.core.version.VersioningLock$WriteLock.release(VersioningLock.java:76)
   at 
 org.apache.jackrabbit.core.version.InternalXAVersionManager$2.internalReleaseWriteLock(InternalXAVersionManager.java:703)
   at 
 org.apache.jackrabbit.core.version.InternalXAVersionManager$2.commit(InternalXAVersionManager.java:691)
   at 
 org.apache.jackrabbit.core.TransactionContext.commit(TransactionContext.java:195)
   at 
 org.apache.jackrabbit.core.XASessionImpl.commit(XASessionImpl.java:326)
   at 
 org.apache.jackrabbit.rmi.server.ServerXASession.commit(ServerXASession.java:58)
   at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
   at sun.rmi.transport.Transport$1.run(Transport.java:159)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142)
   at org.apache.jackrabbit.rmi.server.ServerXASession_Stub.commit(Unknown 
 Source)
   at 
 org.apache.jackrabbit.rmi.client.ClientXASession.commit(ClientXASession.java:74)
   at org.objectweb.jotm.SubCoordinator.doCommit(SubCoordinator.java:1123)
   at 
 org.objectweb.jotm.SubCoordinator.commit_one_phase(SubCoordinator.java:483)
   at org.objectweb.jotm.TransactionImpl.commit(TransactionImpl.java:318)
   at org.objectweb.jotm.Current.commit(Current.java:452)
   at 
 org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1010)
   at 
 org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
   at 
 org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
   at 
 org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
   at 
 org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
   at 
 

[jira] [Commented] (OAK-169) Support orderable nodes

2012-07-10 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13410324#comment-13410324
 ] 

Stefan Guggisberg commented on OAK-169:
---

FWIW: here's the relevant discussion on the oak-dev list: 
http://thread.gmane.org/gmane.comp.apache.jackrabbit.devel/34124/focus=34277

 Support orderable nodes
 ---

 Key: OAK-169
 URL: https://issues.apache.org/jira/browse/OAK-169
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: jcr
Reporter: Jukka Zitting

 There are JCR clients that depend on the ability to explicitly specify the 
 order of child nodes. That functionality is not included in the MicroKernel 
 tree model, so we need to implement it either in oak-core or oak-jcr using 
 something like an extra (hidden) {{oak:childOrder}} property that records the 
 specified ordering of child nodes. A multi-valued string property is probably 
 good enough for this.
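
An illustrative sketch of how orderBefore() could be mapped onto such a multi-valued child-order property; the List-based representation here is an assumption for the example, not Oak's implementation.

{code}
import java.util.List;

class ChildOrder {
    // reorder srcName before destName (null destName moves srcName to the end),
    // mirroring JCR's Node.orderBefore() semantics
    static void orderBefore(List<String> childOrder, String srcName, String destName) {
        if (!childOrder.remove(srcName)) {
            throw new IllegalArgumentException(srcName + " is not a child");
        }
        int pos = (destName == null) ? childOrder.size() : childOrder.indexOf(destName);
        if (pos < 0) {
            throw new IllegalArgumentException(destName + " is not a child");
        }
        childOrder.add(pos, srcName);
        // the updated list would then be written back to the hidden
        // oak:childOrder property mentioned in the issue description
    }
}
{code}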

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-169) Support orderable nodes

2012-07-10 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13410364#comment-13410364
 ] 

Stefan Guggisberg commented on OAK-169:
---

bq. There is currently no statement about iteration order stability in the 
Microkernel API contract.

good point, fixed in svn rev. 1359679

 Support orderable nodes
 ---

 Key: OAK-169
 URL: https://issues.apache.org/jira/browse/OAK-169
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: jcr
Reporter: Jukka Zitting

 There are JCR clients that depend on the ability to explicitly specify the 
 order of child nodes. That functionality is not included in the MicroKernel 
 tree model, so we need to implement it either in oak-core or oak-jcr using 
 something like an extra (hidden) {{oak:childOrder}} property that records the 
 specified ordering of child nodes. A multi-valued string property is probably 
 good enough for this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-167) Caching NodeStore implementation

2012-07-05 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13406581#comment-13406581
 ] 

Stefan Guggisberg edited comment on OAK-167 at 7/5/12 8:44 AM:
---

 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

wouldn't that ideally be the transient space, i.e. oak-jcr?

  was (Author: stefan@jira):
 Such a NodeStore implementation could also be used to better isolate the 
current caching logic behind uncommitted changes.

wouldn't that ideally the transient space, i.e. oak-jcr?
  
 Caching NodeStore implementation
 

 Key: OAK-167
 URL: https://issues.apache.org/jira/browse/OAK-167
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Jukka Zitting

 For remote MicroKernel implementations and other cases where local caching of 
 content is needed it would be useful to have a NodeStore implementation that 
 maintains a simple in-memory or on-disk cache of frequently accessed content. 
 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-167) Caching NodeStore implementation

2012-07-05 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13407028#comment-13407028
 ] 

Stefan Guggisberg commented on OAK-167:
---

IMO we have no choice but to buffer transient changes (-> 'transient space') in 
oak-jcr. otherwise i don't see how we could efficiently remote the JCR api. 
buffering the transient changes in oak-core would mean that every Node.addNode 
and Node.setProperty would trigger a server round-trip. that's not an option. 
see also OAK-162.
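
to illustrate the round-trip argument with a purely hypothetical sketch: the
transient space lives on the client, records operations locally, and only
produces a single commit call on save. none of these types exist in oak; they
only stand in for the idea.

{code}
import java.util.ArrayList;
import java.util.List;

interface RemoteMicroKernel {   // stand-in for a remote MK client, not a real class
    String commit(String path, String jsonDiff, String revisionId, String message);
}

class TransientSpace {
    private final List<String> pendingDiff = new ArrayList<String>();

    void addNode(String path) {                        // no server round-trip
        pendingDiff.add("+\"" + path + "\":{}");
    }

    void setProperty(String path, String name, String jsonValue) {
        pendingDiff.add("^\"" + path + "/" + name + "\":" + jsonValue);
    }

    String save(RemoteMicroKernel mk, String baseRevision) {
        StringBuilder diff = new StringBuilder();
        for (String op : pendingDiff) {
            diff.append(op).append('\n');
        }
        pendingDiff.clear();
        // a single round-trip for the whole batch of transient changes
        return mk.commit("/", diff.toString(), baseRevision, "session save");
    }
}
{code}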

 Caching NodeStore implementation
 

 Key: OAK-167
 URL: https://issues.apache.org/jira/browse/OAK-167
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Jukka Zitting

 For remote MicroKernel implementations and other cases where local caching of 
 content is needed it would be useful to have a NodeStore implementation that 
 maintains a simple in-memory or on-disk cache of frequently accessed content. 
 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-167) Caching NodeStore implementation

2012-07-05 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13407038#comment-13407038
 ] 

Stefan Guggisberg commented on OAK-167:
---

bq. That's a separate issue (OAK-162 as you mentioned). As explained above, 
whatever we do with the transient space, we in any case need some way to 
represent uncommitted changes in oak-core.

ok, agreed

 Caching NodeStore implementation
 

 Key: OAK-167
 URL: https://issues.apache.org/jira/browse/OAK-167
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Jukka Zitting

 For remote MicroKernel implementations and other cases where local caching of 
 content is needed it would be useful to have a NodeStore implementation that 
 maintains a simple in-memory or on-disk cache of frequently accessed content. 
 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-167) Caching NodeStore implementation

2012-07-04 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406581#comment-13406581
 ] 

Stefan Guggisberg commented on OAK-167:
---

 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

wouldn't that ideally the transient space, i.e. oak-jcr?

 Caching NodeStore implementation
 

 Key: OAK-167
 URL: https://issues.apache.org/jira/browse/OAK-167
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: core
Reporter: Jukka Zitting

 For remote MicroKernel implementations and other cases where local caching of 
 content is needed it would be useful to have a NodeStore implementation that 
 maintains a simple in-memory or on-disk cache of frequently accessed content. 
 Such a NodeStore implementation could also be used to better isolate the 
 current caching logic behind uncommitted changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-114) MicroKernel API: specify retention policy for old revisions

2012-07-04 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406609#comment-13406609
 ] 

Stefan Guggisberg commented on OAK-114:
---

 Would it be possible to change "for at least 10 minutes" to "for at least 10 
 minutes since last access"?

the 'extend lease model', apart from introducing complex state management 
requirements in the microkernel, would e.g. allow misbehaved clients to 
compromise the stability of the mk. a client could force the mk to keep old 
revisions forever and prevent vital gc cycles.

i therefore don't think that we should allow clients to (explicitly or 
implicitly) extend the life span of a specific revision.

 MicroKernel API: specify retention policy for old revisions
 ---

 Key: OAK-114
 URL: https://issues.apache.org/jira/browse/OAK-114
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Attachments: OAK-114.patch


 the MicroKernel API javadoc should specify the minimal guaranteed retention 
 period for old revisions. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (JCR-3368) Node#hasNode fails with RepositoryException because intermediate state cannot be retrieved when it should just return false

2012-07-01 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13404753#comment-13404753
 ] 

Stefan Guggisberg commented on JCR-3368:


if the intermediary node state can't be read due to insufficient access rights 
i agree that hasNode should return false; 
if it can't be read (although it is expected to exist) it is IMO correct and 
according to the spec that it throws an exception.

BTW: the stack trace in the issue description is probably caused by a bug in 
CachingHierarchyManager (just an educated guess)... 

 Node#hasNode fails with RepositoryException because intermediate state cannot 
 be retrieved when it should just return false
 ---

 Key: JCR-3368
 URL: https://issues.apache.org/jira/browse/JCR-3368
 Project: Jackrabbit Content Repository
  Issue Type: Bug
Affects Versions: 2.2.12, 2.4.2, 2.5
Reporter: Unico Hommes
 Attachments: HasNodeAfterRemoveTest.java


 See attached test case.
 You will see the following exception:
 javax.jcr.RepositoryException: failed to retrieve state of intermediary node
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:156)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolveNodePath(HierarchyManagerImpl.java:372)
 at org.apache.jackrabbit.core.NodeImpl.getNodeId(NodeImpl.java:276)
 at 
 org.apache.jackrabbit.core.NodeImpl.resolveRelativeNodePath(NodeImpl.java:223)
 at org.apache.jackrabbit.core.NodeImpl.hasNode(NodeImpl.java:2250)
 at 
 org.apache.jackrabbit.core.HasNodeAfterRemoveTest.testRemove(HasNodeAfterRemoveTest.java:14)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at 
 org.apache.jackrabbit.test.AbstractJCRTest.run(AbstractJCRTest.java:456)
 at junit.framework.TestSuite.runTest(TestSuite.java:243)
 at junit.framework.TestSuite.run(TestSuite.java:238)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
 at 
 org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127)
 at org.apache.maven.surefire.Surefire.run(Surefire.java:177)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:616)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009)
 Caused by: org.apache.jackrabbit.core.state.NoSuchItemStateException: 
 c7ccbcd3-0524-4d4d-a109-eae84627f94e
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getTransientItemState(SessionItemStateManager.java:304)
 at 
 org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:153)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.getItemState(HierarchyManagerImpl.java:152)
 at 
 org.apache.jackrabbit.core.HierarchyManagerImpl.resolvePath(HierarchyManagerImpl.java:115)
 at 
 org.apache.jackrabbit.core.CachingHierarchyManager.resolvePath(CachingHierarchyManager.java:152)
 ... 29 more
 I tried several things to fix this but didn't find a better solution than to 
 just wrap the statement
 NodeId id = resolveRelativeNodePath(relPath);
 in a try catch RepositoryException and return false when that exception 
 occurs.
 In particular I tried changing the implementation to
 Path path = resolveRelativePath(relPath).getNormalizedPath();
 return itemMgr.nodeExists(path);
 However, the repository 
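
 For reference, the try/catch workaround mentioned above, written out as a 
 caller-side guard (a sketch using only the plain javax.jcr API; the helper 
 name is illustrative and this is not the actual fix):

 {code}
 import javax.jcr.Node;
 import javax.jcr.RepositoryException;

 // illustrative helper: treat a failure to resolve the relative path as "no such node"
 static boolean hasNodeSafely(Node parent, String relPath) {
     try {
         return parent.hasNode(relPath);
     } catch (RepositoryException e) {
         // e.g. "failed to retrieve state of intermediary node" from a stale hierarchy cache
         return false;
     }
 }
 {code}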

[jira] [Updated] (OAK-114) MicroKernel API: specify retention policy for old revisions

2012-07-01 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-114:
--

Fix Version/s: 0.3
 Assignee: Stefan Guggisberg

 MicroKernel API: specify retention policy for old revisions
 ---

 Key: OAK-114
 URL: https://issues.apache.org/jira/browse/OAK-114
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3


 the MicroKernel API javadoc should specify the minimal guaranteed retention 
 period for old revisions. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (JCR-3366) incorrect handling of leading '{' in jcr names

2012-06-29 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created JCR-3366:
--

 Summary: incorrect handling of leading '{' in jcr names
 Key: JCR-3366
 URL: https://issues.apache.org/jira/browse/JCR-3366
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-spi-commons
Reporter: Stefan Guggisberg


the following code throws a (misleading) exception:

{code}session.getRootNode().addNode("{foo");{code}

the root cause of the exception is that "{foo" is not correctly parsed by 

{code}o.a.j.spi.commons.name.PathParser{code}

{code}PathParser.checkFormat("{foo"){code} succeeds,
but {code}PathParser.parse("{foo", resolver, factory){code} throws a 
{code}MalformedPathException: empty path{code}

{code}NameParser{code} OTOH seems to correctly handle non-expanded-form names 
with leading '{'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (JCR-853) [PATCH] Jackrabbit disallows some nodetype changes which are in fact safe.

2012-06-19 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-853.
---

Resolution: Won't Fix

no progress in 4 years, resolving as won't fix

 [PATCH] Jackrabbit disallows some nodetype changes which are in fact safe.
 --

 Key: JCR-853
 URL: https://issues.apache.org/jira/browse/JCR-853
 Project: Jackrabbit Content Repository
  Issue Type: Improvement
  Components: jackrabbit-core, nodetype
Reporter: Simon Edwards
Assignee: Stefan Guggisberg
Priority: Minor
 Attachments: NodeTypeDefDiff_JCR-853.diff


 When reregistering nodetypes using 
 org.apache.jackrabbit.core.nodetype.NodeTypeRegistry.reregisterNodeType(), 
 Jackrabbit uses the NodeTypeDefDiff class to compare the old nodetype 
 definitions with the new definitions to determine if the nodetype changes can 
 be permitted. Changing a node property from not multiple to multiple is safe 
 and should be allowed, but there is a bug in NodeTypeDefDiff.java which 
 prevents this change from being permitted.
 The NodeTypeDefDiff.buildPropDefDiffs() tries to compare the old and new 
 version of each property definition. Around line 260, it tries to pull the 
 new definition of a property out of a map using the PropDefId object from the 
 old definition as a key. The problem is that this key is not only built up 
 from the name of the property, but also from its attributes (e.g. multiple, 
 type, constraints etc). This means that if the property definition changes, 
 then the new property definition will have a different PropDefId than the old 
 version and hence will not be found in the map. The code then treats the 
 missing property definition as being a major change that can't be permitted.
 see patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (JCR-3197) ItemNotFoundException while adding a NT_FOLDER node inside rootNode

2012-06-19 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3197.


Resolution: Cannot Reproduce

i cannot reproduce the issue. 

looking at the provided stacktrace i see that there's custom code involved 
(econoinfo.*).

i assume the problem is specific to your deployment/environment. 

if you're still experiencing this issue please provide a generic test case 
which can be run with a plain jackrabbit.
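
a generic reproduction attempt against plain jackrabbit would look roughly like 
this (a sketch using jackrabbit-core's TransientRepository; the credentials and 
the folder name are illustrative):

{code}
import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.nodetype.NodeType;
import org.apache.jackrabbit.core.TransientRepository;

public class AddFolderTest {
    public static void main(String[] args) throws Exception {
        TransientRepository repository = new TransientRepository();
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // the call reported to fail with ItemNotFoundException
            Node folder = session.getRootNode().addNode("testFolder", NodeType.NT_FOLDER);
            session.save();
            System.out.println("created " + folder.getPath());
        } finally {
            session.logout();
        }
    }
}
{code}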

 ItemNotFoundException while adding a NT_FOLDER node inside rootNode
 ---

 Key: JCR-3197
 URL: https://issues.apache.org/jira/browse/JCR-3197
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.3.3
 Environment: Linux Ubuntu, Glassfish 3.1.1, JackRabbitJCA 2.3.3.
Reporter: Gustavo Orair
Priority: Critical
  Labels: inconsistency
   Original Estimate: 72h
  Remaining Estimate: 72h

 I got a strange ItemNotFoundException while adding an NT_FOLDER node to the 
 root node.
 The code just calls:
   session.getRootNode().addNode(folderName, NodeType.NT_FOLDER);
 The addNode method failed with ItemNotFoundException.
 Relevant Stack trace:
 Caused by: javax.jcr.ItemNotFoundException: 
 9ea3aca6-c1b8-4955-9ece-546fd12ba5f1
 at 
 org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:384) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at org.apache.jackrabbit.core.ItemManager.getNode(ItemManager.java:669) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at org.apache.jackrabbit.core.ItemManager.getNode(ItemManager.java:647) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at org.apache.jackrabbit.core.NodeImpl.addNode(NodeImpl.java:1286) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at 
 org.apache.jackrabbit.core.session.AddNodeOperation.perform(AddNodeOperation.java:111)
  ~[jackrabbit-core-2.3.3.jar:na]
 at 
 org.apache.jackrabbit.core.session.AddNodeOperation.perform(AddNodeOperation.java:37)
  ~[jackrabbit-core-2.3.3.jar:na]
 at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
  ~[jackrabbit-core-2.3.3.jar:na]
 at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at 
 org.apache.jackrabbit.core.NodeImpl.addNodeWithUuid(NodeImpl.java:1776) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at org.apache.jackrabbit.core.NodeImpl.addNode(NodeImpl.java:1736) 
 ~[jackrabbit-core-2.3.3.jar:na]
 at 
 econoinfo.commons.storage.impl.JCRStorage.obtemOuCriaPasta(JCRStorage.java:602)
  ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 at 
 econoinfo.commons.storage.impl.JCRStorage.obtemOuCriaPasta(JCRStorage.java:608)
  ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 at 
 econoinfo.commons.storage.impl.JCRStorage.obtemOuCriaPasta(JCRStorage.java:608)
  ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 at 
 econoinfo.commons.storage.impl.JCRStorage.obtemOuCriaPasta(JCRStorage.java:608)
  ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 at econoinfo.commons.storage.impl.JCRStorage.add(JCRStorage.java:882) 
 ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 at econoinfo.commons.storage.impl.JCRStorage.add(JCRStorage.java:400) 
 ~[SimpleStorageFacade-JcrStorage-2.2-SNAPSHOT.jar:na]
 ... 146 common frames omitted

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (JCR-3246) RepositoryImpl attempts to close active session twice on shutdown

2012-06-19 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-3246.


Resolution: Cannot Reproduce

RepositoryImpl#activeSessions only contains user sessions (created through 
RepositoryImpl#createSession).

the WorkspaceInfo#systemSession instance OTOH is instantiated lazily in 
WorkspaceInfo#getSystemSession and is *not* added to the 
RepositoryImpl#activeSessions collection.

the observed behavior is most likely caused by custom code (e.g. derived from 
RepositoryImpl).

 RepositoryImpl attempts to close active session twice on shutdown
 -

 Key: JCR-3246
 URL: https://issues.apache.org/jira/browse/JCR-3246
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: jackrabbit-core
Affects Versions: 2.4
Reporter: Jan Haderka
Priority: Critical

 On shutdown sessions are being closed twice, which leads to the exception being 
 logged as shown below. As far as I can tell {{RepositoryImpl}} has system 
 sessions in the list of active sessions which is why it tries to close them 
 twice - first time when closing all active sessions 
 {{(RepositoryImpl.java:1078)}} and second time when disposing workspace 
 {{(RepositoryImpl.java:1090)}}.
 {noformat}
 2012-02-28 10:36:00,614 WARN  org.apache.jackrabbit.core.session.SessionState 
   : Attempt to close session-31 after it has already been closed. Please 
 review your code for proper session management.
 java.lang.Exception: Stack trace of the duplicate attempt to close session-31
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:280)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.security.user.UserManagerImpl.loggedOut(UserManagerImpl.java:1115)
   at 
 org.apache.jackrabbit.core.SessionImpl.notifyLoggedOut(SessionImpl.java:565)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:979)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.doDispose(RepositoryImpl.java:2200)
   at 
 org.apache.jackrabbit.core.RepositoryImpl$WorkspaceInfo.dispose(RepositoryImpl.java:2154)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1090)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 2012-02-28 10:36:00,617 WARN  org.apache.jackrabbit.core.session.SessionState 
   : session-31 has already been closed. See the attached exception for a 
 trace of where this session was closed.
 java.lang.Exception: Stack trace of  where session-31 was originally closed
   at 
 org.apache.jackrabbit.core.session.SessionState.close(SessionState.java:275)
   at org.apache.jackrabbit.core.SessionImpl.logout(SessionImpl.java:943)
   at 
 org.apache.jackrabbit.core.XASessionImpl.logout(XASessionImpl.java:392)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.doShutdown(RepositoryImpl.java:1078)
   at 
 org.apache.jackrabbit.core.RepositoryImpl.shutdown(RepositoryImpl.java:1041)
   at 
 org.apache.jackrabbit.core.jndi.BindableRepository.shutdown(BindableRepository.java:259)
   at 
 org.apache.jackrabbit.core.jndi.RegistryHelper.unregisterRepository(RegistryHelper.java:94)
...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-142) MicroKernel API: returning the :hash should be optional

2012-06-15 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-142:
-

 Summary: MicroKernel API: returning the :hash should be optional
 Key: OAK-142
 URL: https://issues.apache.org/jira/browse/OAK-142
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Stefan Guggisberg


the {{:hash}} property represents the content hash of the node tree rooted at 
the property's parent node. 

the {{:hash}} property is by default not included in the json tree returned by 
the {{getNodes(..)}} method but can be enabled by explicitly specifying it in 
the {{filter}} parameter.

returning the {{:hash}} property should be optional since it might be a too 
heavy requirement on some MicroKernel implementations. an implementation might 
e.g. choose to include the {{:hash}} property only on certain nodes or it might 
choose to not support it at all.

if however a {{:hash}} property is returned it has to obey the content hash 
contract, i.e. identical node trees must have identical {{:hash}} values and 
non-identical node trees must have different {{:hash}} values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-142) MicroKernel API: returning the :hash property should be optional

2012-06-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-142:
--

Summary: MicroKernel API: returning the :hash property should be optional  
(was: MicroKernel API: returning the :hash should be optional)

 MicroKernel API: returning the :hash property should be optional
 

 Key: OAK-142
 URL: https://issues.apache.org/jira/browse/OAK-142
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Stefan Guggisberg

 the {{:hash}} property represents the content hash of the node tree rooted at 
 the property's parent node. 
 the {{:hash}} property is by default not included in the json tree returned 
 by the {{getNodes(..)}} method but can be enabled by explicitly specifying it 
 in the {{filter}} parameter.
 returning the {{:hash}} property should be optional since it might be a too 
 heavy requirement on some MicroKernel implementations. an implementation 
 might e.g. choose to include the {{:hash}} property only on certain nodes or 
 it might choose to not support it at all.
 if however a {{:hash}} property is returned it has to obey the content hash 
 contract, i.e. identical node trees must have identical {{:hash}} values and 
 non-identical node trees must have different {{:hash}} values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-142) MicroKernel API: returning the :hash property should be optional

2012-06-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-142.
---

   Resolution: Fixed
Fix Version/s: 0.3
 Assignee: Stefan Guggisberg

fixed in svn r1350649

 MicroKernel API: returning the :hash property should be optional
 

 Key: OAK-142
 URL: https://issues.apache.org/jira/browse/OAK-142
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3


 the {{:hash}} property represents the content hash of the node tree rooted at 
 the property's parent node. 
 the {{:hash}} property is by default not included in the json tree returned 
 by the {{getNodes(..)}} method but can be enabled by explicitly specifying it 
 in the {{filter}} parameter.
 returning the {{:hash}} property should be optional since it might be a too 
 heavy requirement on some MicroKernel implementations. an implementation 
 might e.g. choose to include the {{:hash}} property only on certain nodes or 
 it might choose to not support it at all.
 if however a {{:hash}} property is returned it has to obey the content hash 
 contract, i.e. identical node trees must have identical {{:hash}} values and 
 non-identical node trees must have different {{:hash}} values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-142) MicroKernel API: returning the :hash property should be optional

2012-06-15 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-142:
--

Component/s: mk

 MicroKernel API: returning the :hash property should be optional
 

 Key: OAK-142
 URL: https://issues.apache.org/jira/browse/OAK-142
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3


 the {{:hash}} property represents the content hash of the node tree rooted at 
 the property's parent node. 
 the {{:hash}} property is by default not included in the json tree returned 
 by the {{getNodes(..)}} method but can be enabled by explicitly specifying it 
 in the {{filter}} parameter.
 returning the {{:hash}} property should be optional since it might be a too 
 heavy requirement on some MicroKernel implementations. an implementation 
 might e.g. choose to include the {{:hash}} property only on certain nodes or 
 it might choose to not support it at all.
 if however a {{:hash}} property is returned it has to obey the content hash 
 contract, i.e. identical node trees must have identical {{:hash}} values and 
 non-identical node trees must have different {{:hash}} values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-130) Unexpected result of MicroKernel#getJournal after MicroKernel#merge

2012-06-14 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-130.
---

   Resolution: Fixed
Fix Version/s: 0.3

fixed in svn r1350198

 Unexpected result of MicroKernel#getJournal after MicroKernel#merge
 ---

 Key: OAK-130
 URL: https://issues.apache.org/jira/browse/OAK-130
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Thomas Mueller
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.3


 There is an unexpected behavior for the following test case:
 String branch = mk.branch(head);
 branch = mk.commit("/", "+\"branch\": {}", branch, "");
 head = mk.commit("/", "+\"head\": {}", head, "");
 head = mk.merge(branch, "");
 String n = mk.getNodes("/", head, 10, 0, 10, "");
 String j = mk.getJournal(first, head, "/");
 getNodes returns both nodes (branch and head), but
 getJournal returns +/head:{}, /head:/branch

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-138) Move client/server package in oak-mk to separate project

2012-06-13 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13294307#comment-13294307
 ] 

Stefan Guggisberg commented on OAK-138:
---

 It also contains the MicroKernel API itself. As long as we don't want to 
 split the API to a separate component, things like remoting IMHO fit best 
 within that same component.

my understanding was that we'll need to split the API to a separate component 
anyway sooner or later.
with the current setup alternative mk implementations require an explicit 
dependency on the 'default' implementation in oak-mk, including all transitive 
dependencies such as h2 etc. i find that weird.

IMHO an alternative mk implementation should only require a dependency on the 
MicroKernel API.

therefore, 
+1 for oak-mk-remote
+1 for oak-mk-api 

 Move client/server package in oak-mk to separate project
 

 Key: OAK-138
 URL: https://issues.apache.org/jira/browse/OAK-138
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, it, mk, run
Affects Versions: 0.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 As a further cleanup step in OAK-13, I'd like to move the packages 
 o.a.j.mk.client and o.a.j.mk.server and referenced classes in oak-mk to a 
 separate project, e.g. oak-mk-remote.
 This new project will then be added as a dependency to:
 oak-core
 oak-run
 oak-it-mk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-130) Unexpected result of MicroKernel#getJournal after MicroKernel#merge

2012-06-13 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-130:
--

Summary: Unexpected result of MicroKernel#getJournal after 
MicroKernel#merge  (was: Unexpected behavior of MicroKernelImpl.merge)

 Unexpected result of MicroKernel#getJournal after MicroKernel#merge
 ---

 Key: OAK-130
 URL: https://issues.apache.org/jira/browse/OAK-130
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Reporter: Thomas Mueller
Assignee: Stefan Guggisberg
Priority: Minor

 There is an unexpected behavior for the following test case:
 String branch = mk.branch(head);
 branch = mk.commit("/", "+\"branch\": {}", branch, "");
 head = mk.commit("/", "+\"head\": {}", head, "");
 head = mk.merge(branch, "");
 String n = mk.getNodes("/", head, 10, 0, 10, "");
 String j = mk.getJournal(first, head, "/");
 getNodes returns both nodes (branch and head), but
 getJournal returns +/head:{}, /head:/branch

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (JCR-3337) Icreated two node ,they are brother.when i select one to execute serching sql,it will throw a NullPointException

2012-06-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3337:
---

Component/s: query
   Priority: Major  (was: Critical)

please provide a simple test case which can be run against a standard 
jackrabbit. 

 Icreated two node ,they are brother.when i select one to execute serching 
 sql,it will throw a NullPointException
 

 Key: JCR-3337
 URL: https://issues.apache.org/jira/browse/JCR-3337
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query
Affects Versions: 2.4.1, 2.5
 Environment: window+mysql+jdk1.6
Reporter: zfk
  Labels: jackrabbit
   Original Estimate: 276h
  Remaining Estimate: 276h

 I created two sibling nodes in the system, for example test and test2. When I 
 selected one and executed the SQL query: select a.* from [nt:base] as a where 
 not isdescendantnode(a,'/test/fff'), the system threw a 
 java.lang.NullPointerException: at 
 org.apache.jackrabbit.core.query.lucene.NotQuery$NotQueryScorer.nextDoc(NotQuery.java:191)
 at 
 org.apache.lucene.search.ConjunctionScorer.init(ConjunctionScorer.java:42)
   at 
 org.apache.lucene.search.ConjunctionScorer.init(ConjunctionScorer.java:33)
   at 
 org.apache.lucene.search.BooleanScorer2$2.init(BooleanScorer2.java:178)
   at 
 org.apache.lucene.search.BooleanScorer2.countingConjunctionSumScorer(BooleanScorer2.java:173)
   at 
 org.apache.lucene.search.BooleanScorer2.makeCountingSumScorerSomeReq(BooleanScorer2.java:230)
   at 
 org.apache.lucene.search.BooleanScorer2.makeCountingSumScorer(BooleanScorer2.java:208)
   at 
 org.apache.lucene.search.BooleanScorer2.init(BooleanScorer2.java:101)
   at 
 org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:336)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:210)
   at org.apache.lucene.search.Searcher.search(Searcher.java:67)
   at 
 org.apache.jackrabbit.core.query.lucene.SortedLuceneQueryHits.getHits(SortedLuceneQueryHits.java:156)
   at 
 org.apache.jackrabbit.core.query.lucene.SortedLuceneQueryHits.init(SortedLuceneQueryHits.java:113)
   at 
 org.apache.jackrabbit.core.query.lucene.JackrabbitIndexSearcher.evaluate(JackrabbitIndexSearcher.java:109)
   at 
 org.apache.jackrabbit.core.query.lucene.LuceneQueryFactory.execute(LuceneQueryFactory.java:219)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:465)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:126)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:115)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl$2.perform(QueryObjectModelImpl.java:129)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl$2.perform(QueryObjectModelImpl.java:124)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl.execute(QueryObjectModelImpl.java:123)
   at 
 com.gsoft.core.session.GcrSessionHelperImpl.query(GcrSessionHelperImpl.java:99)
   at com.gsoft.gcr.cms.dao.query.QueryDao.query(QueryDao.java:26)
   at 
 com.gsoft.gcr.cms.biz.impl.CmsFilterBizImpl.query(CmsFilterBizImpl.java:31)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
   at 
 org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
   at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
   at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
   at $Proxy6.query(Unknown Source)
   at 
 com.gsoft.gcr.cms.action.CmsFilterAction.query(CmsFilterAction.java:117)

[jira] [Updated] (JCR-3337) Icreated two node ,they are brother.when i select one to execute serching sql,it will throw a NullPointException

2012-06-12 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-3337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-3337:
---

Component/s: (was: jackrabbit-api)

 Icreated two node ,they are brother.when i select one to execute serching 
 sql,it will throw a NullPointException
 

 Key: JCR-3337
 URL: https://issues.apache.org/jira/browse/JCR-3337
 Project: Jackrabbit Content Repository
  Issue Type: Bug
  Components: query
Affects Versions: 2.4.1, 2.5
 Environment: window+mysql+jdk1.6
Reporter: zfk
  Labels: jackrabbit
   Original Estimate: 276h
  Remaining Estimate: 276h

 I created two sibling nodes in the system, for example test and test2. When I 
 selected one and executed the SQL query: select a.* from [nt:base] as a where 
 not isdescendantnode(a,'/test/fff'), the system threw a 
 java.lang.NullPointerException: at 
 org.apache.jackrabbit.core.query.lucene.NotQuery$NotQueryScorer.nextDoc(NotQuery.java:191)
 at 
 org.apache.lucene.search.ConjunctionScorer.init(ConjunctionScorer.java:42)
   at 
 org.apache.lucene.search.ConjunctionScorer.init(ConjunctionScorer.java:33)
   at 
 org.apache.lucene.search.BooleanScorer2$2.init(BooleanScorer2.java:178)
   at 
 org.apache.lucene.search.BooleanScorer2.countingConjunctionSumScorer(BooleanScorer2.java:173)
   at 
 org.apache.lucene.search.BooleanScorer2.makeCountingSumScorerSomeReq(BooleanScorer2.java:230)
   at 
 org.apache.lucene.search.BooleanScorer2.makeCountingSumScorer(BooleanScorer2.java:208)
   at 
 org.apache.lucene.search.BooleanScorer2.init(BooleanScorer2.java:101)
   at 
 org.apache.lucene.search.BooleanQuery$BooleanWeight.scorer(BooleanQuery.java:336)
   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:210)
   at org.apache.lucene.search.Searcher.search(Searcher.java:67)
   at 
 org.apache.jackrabbit.core.query.lucene.SortedLuceneQueryHits.getHits(SortedLuceneQueryHits.java:156)
   at 
 org.apache.jackrabbit.core.query.lucene.SortedLuceneQueryHits.init(SortedLuceneQueryHits.java:113)
   at 
 org.apache.jackrabbit.core.query.lucene.JackrabbitIndexSearcher.evaluate(JackrabbitIndexSearcher.java:109)
   at 
 org.apache.jackrabbit.core.query.lucene.LuceneQueryFactory.execute(LuceneQueryFactory.java:219)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:465)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:126)
   at 
 org.apache.jackrabbit.core.query.lucene.join.QueryEngine.execute(QueryEngine.java:115)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl$2.perform(QueryObjectModelImpl.java:129)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl$2.perform(QueryObjectModelImpl.java:124)
   at 
 org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
   at 
 org.apache.jackrabbit.core.query.QueryObjectModelImpl.execute(QueryObjectModelImpl.java:123)
   at 
 com.gsoft.core.session.GcrSessionHelperImpl.query(GcrSessionHelperImpl.java:99)
   at com.gsoft.gcr.cms.dao.query.QueryDao.query(QueryDao.java:26)
   at 
 com.gsoft.gcr.cms.biz.impl.CmsFilterBizImpl.query(CmsFilterBizImpl.java:31)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
   at 
 org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
   at 
 org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
   at 
 org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
   at 
 org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
   at $Proxy6.query(Unknown Source)
   at 
 com.gsoft.gcr.cms.action.CmsFilterAction.query(CmsFilterAction.java:117)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 

[jira] [Commented] (OAK-138) Move client/server package in oak-mk to separate project

2012-06-12 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293666#comment-13293666
 ] 

Stefan Guggisberg commented on OAK-138:
---

 I'd rather just put the code to oak-mk as it's tightly bound to the 
 MicroKernel interface.

oak-mk contains one specific implementation of the MicroKernel API. 

the remoting code OTOH is generic and can be used with any mk implementation. 
i'd prefer a separate module for the remoting layer instead of putting 
everything mk-related into the current oak-mk module.

 Move client/server package in oak-mk to separate project
 

 Key: OAK-138
 URL: https://issues.apache.org/jira/browse/OAK-138
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, it, mk, run
Affects Versions: 0.3
Reporter: Dominique Pfister
Assignee: Dominique Pfister

 As a further cleanup step in OAK-13, I'd like to move the packages 
 o.a.j.mk.client and o.a.j.mk.server and referenced classes in oak-mk to a 
 separate project, e.g. oak-mk-remote.
 This new project will then be added as a dependency to:
 oak-core
 oak-run
 oak-it-mk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-32) Drop MicroKernel.dispose()

2012-06-01 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-32.
--

   Resolution: Fixed
Fix Version/s: 0.3
 Assignee: Stefan Guggisberg

fixed as proposed in svn r1345141

the MicroKernelFactory issue (who should MicroKernel instances ideally be 
created) has not been adressed. 

 Drop MicroKernel.dispose()
 --

 Key: OAK-32
 URL: https://issues.apache.org/jira/browse/OAK-32
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Jukka Zitting
Assignee: Stefan Guggisberg
 Fix For: 0.3

 Attachments: OAK-32.patch


 Just like a client of the MicroKernel interface doesn't know how a MK 
 instance is created, there should not be a need for a client to be able to 
 dispose an instance. For example the lifecycle of a MK instance running as an 
 OSGi service (or any other component framework) is managed by the framework, 
 not by clients. Thus I suggest that the MicroKernel.dispose() method is 
 removed.
 The only piece of code that's notably affected by this change is the 
 MicroKernelFactory class still in oak-core and any client code that uses it 
 to construct new MicroKernel instances. I think we should replace the MKF 
 class with a more generic solution as outlined in OAK-17.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-32) Drop MicroKernel.dispose()

2012-06-01 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-32?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287381#comment-13287381
 ] 

Stefan Guggisberg edited comment on OAK-32 at 6/1/12 1:14 PM:
--

fixed as proposed in svn r1345141

the MicroKernelFactory issue (who should MicroKernel instances ideally be 
created) has not been addressed. 

  was (Author: stefan@jira):
fixed as proposed in svn r1345141

the MicroKernelFactory issue (who should MicroKernel instances ideally be 
created) has not been adressed. 
  
 Drop MicroKernel.dispose()
 --

 Key: OAK-32
 URL: https://issues.apache.org/jira/browse/OAK-32
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Jukka Zitting
Assignee: Stefan Guggisberg
 Fix For: 0.3

 Attachments: OAK-32.patch


 Just like a client of the MicroKernel interface doesn't know how a MK 
 instance is created, there should not be a need for a client to be able to 
 dispose an instance. For example the lifecycle of a MK instance running as an 
 OSGi service (or any other component framework) is managed by the framework, 
 not by clients. Thus I suggest that the MicroKernel.dispose() method is 
 removed.
 The only piece of code that's notably affected by this change is the 
 MicroKernelFactory class still in oak-core and any client code that uses it 
 to construct new MicroKernel instances. I think we should replace the MKF 
 class with a more generic solution as outlined in OAK-17.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-32) Drop MicroKernel.dispose()

2012-06-01 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-32?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287381#comment-13287381
 ] 

Stefan Guggisberg edited comment on OAK-32 at 6/1/12 2:09 PM:
--

fixed as proposed in svn r1345141

the MicroKernelFactory issue (i.e. how should MicroKernel instances ideally be 
created) has not been addressed. 

  was (Author: stefan@jira):
fixed as proposed in svn r1345141

the MicroKernelFactory issue (who should MicroKernel instances ideally be 
created) has not been addressed. 
  
 Drop MicroKernel.dispose()
 --

 Key: OAK-32
 URL: https://issues.apache.org/jira/browse/OAK-32
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Jukka Zitting
Assignee: Stefan Guggisberg
 Fix For: 0.3

 Attachments: OAK-32.patch


 Just like a client of the MicroKernel interface doesn't know how a MK 
 instance is created, there should not be a need for a client to be able to 
 dispose an instance. For example the lifecycle of a MK instance running as an 
 OSGi service (or any other component framework) is managed by the framework, 
 not by clients. Thus I suggest that the MicroKernel.dispose() method is 
 removed.
 The only piece of code that's notably affected by this change is the 
 MicroKernelFactory class still in oak-core and any client code that uses it 
 to construct new MicroKernel instances. I think we should replace the MKF 
 class with a more generic solution as outlined in OAK-17.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-126) remove unused code

2012-06-01 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-126:
-

 Summary: remove unused code
 Key: OAK-126
 URL: https://issues.apache.org/jira/browse/OAK-126
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Priority: Minor


in oak-mk there are a couple of classes that are (currently) not used.

e.g.

o.a.jackrabbit.mk.persistence.BDbPersistence
o.a.jackrabbit.mk.persistence.FSPersistence
o.a.jackrabbit.mk.persistence.MongoPersistence
o.a.jackrabbit.mk.fs.*
o.a.jackrabbit.mk.blobs.MongoBlobStore

in the interest of a leaner code base and a clearer structure i'd like to 
remove such unused code. 


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-56) File system abstraction

2012-06-01 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-56?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-56.
--

Resolution: Won't Fix

resolving as won't fix for now, as discussed with thomas off-list.

we will revisit this topic once we have a concrete use case/need for a file 
system abstraction in oak-mk.

 

 File system abstraction
 ---

 Key: OAK-56
 URL: https://issues.apache.org/jira/browse/OAK-56
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: core, mk
Reporter: Thomas Mueller
Assignee: Thomas Mueller
Priority: Minor

 A file system abstraction allows adding new features (cross-cutting concerns) 
 in a modular way, for example:
 - detection and special behavior of out-of-disk space situation
 - profiling and statistics over JMX
 - re-try on file system problems
 - encryption
 - file system monitoring
 - replication / real-time backup on the file system level (for clustering)
 - caching (improved performance for CRX)
 - allows easily switching to faster file system APIs (FileChannel, memory 
 mapped files)
 - debugging (for example, logging all file system operations)
 - allows implementing s3 / hadoop / mongodb / ... file systems - not only by 
 us but by third parties, possibly the end user
 - zip file system (for example to support read-only, compressed repositories)
 - testing: simulating out of disk space and out of memory (ensure the 
 repository doesn't corrupt in this case)
 - testing: simulate very large files (using an in-memory file system)
 - splitting very large files in 2 gb blocks (FAT and other file systems that 
 don't support large files)
 - data compression (if needed)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-13) Cleanup org.apache.jackrabbit.mk

2012-06-01 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-13:
-

Component/s: (was: mk)
 core

 Cleanup org.apache.jackrabbit.mk
 

 Key: OAK-13
 URL: https://issues.apache.org/jira/browse/OAK-13
 Project: Jackrabbit Oak
  Issue Type: Task
  Components: core
Reporter: angela

 imo we should clean up the org.apache.jackrabbit.mk.* packages that are 
 currently located in the oak-core module.
 for me it is really hard to distinguish between code that is really part of the 
 productive code base for the oak project and code that is purely experimental 
 chunks and leftovers.
 in order to become familiar with the code that would be helpful for me and 
 anybody else that will contribute to the project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-32) Drop MicroKernel.dispose()

2012-05-29 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-32:
-

Attachment: OAK-32.patch

proposed patch

 Drop MicroKernel.dispose()
 --

 Key: OAK-32
 URL: https://issues.apache.org/jira/browse/OAK-32
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Jukka Zitting
 Attachments: OAK-32.patch


 Just like a client of the MicroKernel interface doesn't know how a MK 
 instance is created, there should not be a need for a client to be able to 
 dispose an instance. For example the lifecycle of a MK instance running as an 
 OSGi service (or any other component framework) is managed by the framework, 
 not by clients. Thus I suggest that the MicroKernel.dispose() method is 
 removed.
 The only piece of code that's notably affected by this change is the 
 MicroKernelFactory class still in oak-core and any client code that uses it 
 to construct new MicroKernel instances. I think we should replace the MKF 
 class with a more generic solution as outlined in OAK-17.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-117) MicroKernel API: add delete(String blobId) method

2012-05-29 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-117:
-

 Summary: MicroKernel API: add delete(String blobId) method
 Key: OAK-117
 URL: https://issues.apache.org/jira/browse/OAK-117
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Stefan Guggisberg


the MicroKernel API does provide methods for writing and reading binary data;

but there's no method to delete binary data.

i suggest we add the following method:

boolean delete(String blobId) throws MicroKernelException;
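
usage would then be symmetric with the existing write/read methods, roughly as 
follows (a sketch; {{mk}} is an existing MicroKernel instance, {{data}} stands 
for an arbitrary byte[], and delete is the proposed addition, not yet in the API):

{code}
// write a blob, reference it from content, later release it explicitly
String blobId = mk.write(new java.io.ByteArrayInputStream(data));
// ... read it back via the existing read(blobId, ...) method ...
boolean removed = mk.delete(blobId);   // proposed addition
{code}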

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (OAK-117) MicroKernel API: add delete(String blobId) method

2012-05-29 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-117:
-

Assignee: Stefan Guggisberg

 MicroKernel API: add delete(String blobId) method
 -

 Key: OAK-117
 URL: https://issues.apache.org/jira/browse/OAK-117
 Project: Jackrabbit Oak
  Issue Type: Improvement
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg

 the MicroKernel API does provide methods for writing and reading binary data;
 but there's no method to delete binary data.
 i suggest we add the following method:
 boolean delete(String blobId) throws MicroKernelException;

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-113) drop MicroKernel getNodes(String, String) convenience signature

2012-05-25 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-113.
---

Resolution: Fixed

fixed in svn r1342597

 drop MicroKernel getNodes(String, String) convenience signature
 ---

 Key: OAK-113
 URL: https://issues.apache.org/jira/browse/OAK-113
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.3


 the MK API provides two getNodes signatures:
 - {{getNodes(String, String)}}
 - {{getNodes(String, String, int, long, int, String)}}
 the former is a convenience method and equivalent to 
 {{getNodes(path, revisionId, 1, 0, -1, null)}}.
 it's currently only used in test cases. in order to 
 streamline the API and the javadoc i suggest to drop it. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-116) MicroKernel API: clarify semantics of getNodes depth, offset and count parameters

2012-05-25 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-116.
---

Resolution: Fixed

fixed in svn r1342796

 MicroKernel API: clarify semantics of getNodes depth, offset and count 
 parameters
 -

 Key: OAK-116
 URL: https://issues.apache.org/jira/browse/OAK-116
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3


 {{MicroKernel.getNodes(String path, String revisionId, int depth, long offset, 
 int count, String filter)}}
 the semantics of {{depth}} as currently documented in the javadoc is 
 inconsistent: 
 - depth=0 returns empty child node objects
 - depth=1 OTOH doesn't return empty grand children objects 
 the amount of information returned on the deepest level should IMO be 
 independent of the depth value. 
 {{count}} as currently documented is only applied to the root of the returned 
 subtree. this would imply that the implementation has to always return *all* 
 child nodes on deeper levels, even for potentially very large child node sets.
 i suggest we rename {{count}} to {{maxChildNodes}} and apply it on every 
 level of the returned subtree.
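
To make the proposal concrete, a call under the suggested semantics could look like the sketch below; the renamed maxChildNodes parameter is the change proposed above, and the literal values are only an example.

{code}
import org.apache.jackrabbit.mk.api.MicroKernel;

public class GetNodesSketch {
    // Sketch of the proposed semantics: the child-node limit applies on
    // every level of the returned subtree, not only at the root.
    static String readSubtree(MicroKernel mk) {
        return mk.getNodes(
                "/content",            // path
                mk.getHeadRevision(),  // revisionId
                2,                     // depth: children and grandchildren included
                0,                     // offset into the child node list
                100,                   // maxChildNodes (proposed rename of 'count'), per level
                null);                 // filter: use the default
    }
}
{code}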

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-25 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-75.
--

   Resolution: Fixed
Fix Version/s: 0.3

fixed in svn r1342796

 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3

 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-113) drop MicroKernel getNodes(String, String) convenience signature

2012-05-24 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-113:
-

 Summary: drop MicroKernel getNodes(String, String) convenience 
signature
 Key: OAK-113
 URL: https://issues.apache.org/jira/browse/OAK-113
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
Priority: Minor
 Fix For: 0.3


the MK API provides two getNodes signatures:

- {{getNodes(String, String)}}
- {{getNodes(String, String, int, long, int, String)}}

the former is a convenience method and equivalent to 
{{getNodes(path, revisionId, 1, 0, -1, null)}}.

it's currently only used in test cases. in order to 
streamline the API and the javadoc i suggest to drop it. 
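
A migration for existing callers would be mechanical, per the equivalence stated above; the sketch below simply assumes the full signature stays as currently documented.

{code}
import org.apache.jackrabbit.mk.api.MicroKernel;

public class GetNodesMigrationSketch {
    static String readNode(MicroKernel mk, String path, String revisionId) {
        // before: convenience signature slated for removal
        // String json = mk.getNodes(path, revisionId);

        // after: the equivalent call through the full signature
        return mk.getNodes(path, revisionId, 1, 0, -1, null);
    }
}
{code}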

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (OAK-114) MicroKernel API: specify retention policy for old revisions

2012-05-24 Thread Stefan Guggisberg (JIRA)
Stefan Guggisberg created OAK-114:
-

 Summary: MicroKernel API: specify retention policy for old 
revisions
 Key: OAK-114
 URL: https://issues.apache.org/jira/browse/OAK-114
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg


the MicroKernel API javadoc should specify the minimal guaranteed retention 
period for old revisions. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (OAK-116) MicroKernel API: clarify semantics of getNodes depth, offset and count parameters

2012-05-24 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg reassigned OAK-116:
-

Assignee: Stefan Guggisberg

 MicroKernel API: clarify semantics of getNodes depth, offset and count 
 parameters
 -

 Key: OAK-116
 URL: https://issues.apache.org/jira/browse/OAK-116
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Stefan Guggisberg
Assignee: Stefan Guggisberg
 Fix For: 0.3


 {{MicroKernel.getNodes(String path, String revisionId, int depth, long offset, 
 int count, String filter)}}
 the semantics of {{depth}} as currently documented in the javadoc is 
 inconsistent: 
 - depth=0 returns empty child node objects
 - depth=1 OTOH doesn't return empty grand children objects 
 the amount of information returned on the deepest level should IMO be 
 independent of the depth value. 
 {{count}} as currently documented is only applied to the root of the returned 
 subtree. this would imply that the implementation has to always return *all* 
 child nodes on deeper levels, even for potentially very large child node sets.
 i suggest we rename {{count}} to {{maxChildNodes}} and apply it on every 
 level of the returned subtree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13281630#comment-13281630
 ] 

Stefan Guggisberg commented on OAK-75:
--

added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since {{{\}}} (backslash) needs 
to be escaped in json (leading to pretty awkward java strings such as e.g. 
nodes:[\foo*\]) i've chosen to require the literal * to be 
escaped instead of the wildcard. 

{{{ 
Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
}}}
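
For illustration, two filters following the syntax above might be built as in the sketch below; the "properties"/"nodes" key names are taken from the examples in this issue, and the comments spell out the escaping levels (glob vs. JSON vs. Java string literal) that motivated the choice described here.

{code}
import org.apache.jackrabbit.mk.api.MicroKernel;

public class FilterSketch {
    static String readFiltered(MicroKernel mk, String revisionId) {
        // Include all properties plus the :hash meta property, exclude tmp* ones.
        // A bare '*' is the wildcard; a '-' prefix marks an exclusion pattern.
        String filter = "{\"properties\":[\"*\",\":hash\",\"-tmp*\"]}";

        // A literal asterisk is written as \* in the glob, which becomes \\* in JSON,
        // which becomes \\\\* in a Java string literal (shown only to illustrate escaping).
        String literalStar = "{\"nodes\":[\"foo\\\\*bar\"]}";

        return mk.getNodes("/content", revisionId, 1, 0, -1, filter);
    }
}
{code}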

 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13281630#comment-13281630
 ] 

Stefan Guggisberg edited comment on OAK-75 at 5/23/12 2:38 PM:
---

added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since {noformat}\{noformat} 
(backslash) needs to be escaped in json (leading to pretty awkward java strings 
such as e.g. {noformat}{nodes:[\foo*\]}}{noformat}) i've chosen to 
require the literal * to be escaped instead of the wildcard. 

{noformat} 
Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
{noformat} 

  was (Author: stefan@jira):
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since {{{\}}} (backslash) needs 
to be escaped in json (leading to pretty awkward java strings such as e.g. 
nodes:[\foo*\]) i've chosen to require the literal * to be 
escaped instead of the wildcard. 

{{{ 
Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
}}}
  
 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13281630#comment-13281630
 ] 

Stefan Guggisberg edited comment on OAK-75 at 5/23/12 2:40 PM:
---

{noformat} 
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since \ (backslash) needs to be 
escaped in json (leading to pretty awkward java strings such as e.g. 
{nodes:[\foo*\]}}) i've chosen to require the literal * to be escaped 
instead of the wildcard. 

Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 


  was (Author: stefan@jira):
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since {noformat}\{noformat} 
(backslash) needs to be escaped in json (leading to pretty awkward java strings 
such as e.g. {noformat}{nodes:[\foo*\]}}{noformat}) i've chosen to 
require the literal * to be escaped instead of the wildcard. 

{noformat} 
Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
{noformat} 
  
 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13281630#comment-13281630
 ] 

Stefan Guggisberg edited comment on OAK-75 at 5/23/12 2:41 PM:
---

{noformat} 
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since \ (backslash) needs to be 
escaped in json (leading to pretty awkward java strings such as e.g. 
{nodes:[\foo*\]}}) i've chosen to require the literal * to be escaped 
instead of the wildcard. 

Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
{noformat} 

  was (Author: stefan@jira):
{noformat} 
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since \ (backslash) needs to be 
escaped in json (leading to pretty awkward java strings such as e.g. 
{nodes:[\foo*\]}}) i've chosen to require the literal * to be escaped 
instead of the wildcard. 

Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 

  
 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-23 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13281630#comment-13281630
 ] 

Stefan Guggisberg edited comment on OAK-75 at 5/23/12 2:43 PM:
---

{noformat} 
added initial support for glob-based getNodes filter 
in svn r1341873.

the format is as suggested by michael. however, since \ (backslash) 
needs to be escaped in json (leading to pretty awkward java strings 
such as e.g. {nodes:[\foo*\]}}) i've chosen to require the 
literal * to be escaped instead of the wildcard. 

Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; 
all others are considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) 
if it should be interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring 
in the target name. 
* (asterisk) occurrences within the glob to be interpreted as literals 
must be escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match 
but none of the exclusion patterns. 
{noformat} 

  was (Author: stefan@jira):
{noformat} 
added initial support for glob-based getNodes filter in svn r1341873.

the format is as suggested by michael. however, since \ (backslash) needs to be 
escaped in json (leading to pretty awkward java strings such as e.g. 
{nodes:[\foo*\]}}) i've chosen to require the literal * to be escaped 
instead of the wildcard. 

Glob Syntax: 

a nodes or properties filter consists of one or more globs. 
a glob prefixed by - (dash) is treated as an exclusion pattern; all others are 
considered inclusion patterns. 
a leading - (dash) must be escaped by prepending \ (backslash) if it should be 
interpreted as a literal. 
* (asterisk) serves as a wildcard, i.e. it matches any substring in the target 
name. 
* (asterisk) occurrences within the glob to be interpreted as literals must be 
escaped by prepending \ (backslash). 
a filter matches a target name if any of the inclusion patterns match but none 
of the exclusion patterns. 
{noformat} 
  
 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-45) Add support for branching and merging of private copies to MicroKernel

2012-05-23 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-45.
--

   Resolution: Fixed
Fix Version/s: 0.3

there's javadoc, a basic integration test in MicroKernelIT
and an implementation in oak-mk; resolving as fixed

 Add support for branching and merging of private copies to MicroKernel
 --

 Key: OAK-45
 URL: https://issues.apache.org/jira/browse/OAK-45
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Michael Dürig
Assignee: Stefan Guggisberg
 Fix For: 0.3

 Attachments: OAK-45__OOME.patch


 As discussed on the dev list [1] we should add support to the Microkernel for 
 branching of a private working copy which can be merged back later:
 {code}
 String addLotsOfData(MicroKernel mk) { 
 String baseRevision = mk.getHeadRevision(); 
 String branchRevision = mk.branch(baseRevision); 
 for (int i = 0; i < 100; i++) { 
 branchRevision = mk.commit("/", "+\"node" + i + "\":{}", 
 branchRevision, null); 
 } 
 return mk.merge(branchRevision, baseRevision); } 
 {code}
 [1] http://markmail.org/message/jbbut6vzvmmjqonr

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-12) Implement a test suite for the MicroKernel

2012-05-23 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-12.
--

   Resolution: Fixed
Fix Version/s: 0.3

MicroKernelIT covers all MicroKernel API methods.

resolving as fixed

 Implement a test suite for the MicroKernel
 --

 Key: OAK-12
 URL: https://issues.apache.org/jira/browse/OAK-12
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Michael Dürig
Assignee: Stefan Guggisberg
  Labels: test
 Fix For: 0.3


 We should have a test suite which thoroughly covers the contract of the 
 MicroKernel API

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-32) Drop MicroKernel.dispose()

2012-05-23 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-32?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-32:
-

Component/s: mk

 Drop MicroKernel.dispose()
 --

 Key: OAK-32
 URL: https://issues.apache.org/jira/browse/OAK-32
 Project: Jackrabbit Oak
  Issue Type: Improvement
  Components: mk
Reporter: Jukka Zitting

 Just like a client of the MicroKernel interface doesn't know how a MK 
 instance is created, there should not be a need for a client to be able to 
 dispose an instance. For example the lifecycle of a MK instance running as an 
 OSGi service (or any other component framework) is managed by the framework, 
 not by clients. Thus I suggest that the MicroKernel.dispose() method is 
 removed.
 The only piece of code that's notably affected by this change is the 
 MicroKernelFactory class still in oak-core and any client code that uses it 
 to construct new MicroKernel instances. I think we should replace the MKF 
 class with a more generic solution as outlined in OAK-17.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-107) testConcurrentMergeGC fails intermittently

2012-05-23 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-107:
--

Component/s: mk

 testConcurrentMergeGC fails intermittently
 --

 Key: OAK-107
 URL: https://issues.apache.org/jira/browse/OAK-107
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.2.1
Reporter: Dominique Pfister
 Attachments: stacktrace.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (OAK-83) Copy operation would recurse indefinitely if memory permitted

2012-05-04 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-83?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved OAK-83.
--

   Resolution: Fixed
Fix Version/s: 0.3

fixed in svn r1333899

 Copy operation would recurse indefinitely if memory permitted
 -

 Key: OAK-83
 URL: https://issues.apache.org/jira/browse/OAK-83
 Project: Jackrabbit Oak
  Issue Type: Bug
  Components: mk
Affects Versions: 0.1
Reporter: Michael Dürig
Assignee: Stefan Guggisberg
 Fix For: 0.3


 {code}
 microKernel.commit("", "*\"/a\":\"/a/b\"", null, null);
 {code}
 causes
 {code}
 java.lang.OutOfMemoryError: Java heap space
   at java.util.Arrays.copyOfRange(Arrays.java:3209)
   at java.lang.String.<init>(String.java:215)
   at java.lang.StringBuilder.toString(StringBuilder.java:430)
   at 
 org.apache.jackrabbit.oak.commons.PathUtils.concat(PathUtils.java:320)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
   at 
 org.apache.jackrabbit.mk.model.CommitBuilder.copyStagedNodes(CommitBuilder.java:293)
 {code}
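
The commit above copies /a underneath itself, which is what sends copyStagedNodes into unbounded recursion. A guard of the following shape would reject such a target up front; it is purely illustrative, not necessarily how r1333899 fixes it, and PathUtils.isAncestor is assumed to exist alongside the PathUtils.concat seen in the stack trace.

{code}
import org.apache.jackrabbit.oak.commons.PathUtils;

public class CopyTargetCheck {
    // Illustrative guard: refuse to copy a node onto or underneath its own source path.
    static void checkCopyTarget(String srcPath, String destPath) {
        if (srcPath.equals(destPath) || PathUtils.isAncestor(srcPath, destPath)) {
            throw new IllegalArgumentException(
                    "cannot copy " + srcPath + " to " + destPath + " (own subtree)");
        }
    }
}
{code}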

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-04 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated OAK-75:
-

Attachment: OAK-83.patch

proposed MicroKernel API change/clarification

 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (OAK-75) specify format and semantics of 'filter' parameter in MicroKernel API

2012-05-04 Thread Stefan Guggisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/OAK-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13268385#comment-13268385
 ] 

Stefan Guggisberg commented on OAK-75:
--

 Would that also be the case if we only add the (path-) filter at 
 getRevisionHistory, and keep waitForCommit and getHeadRevision as they are 
 now?

i've attached a new proposal (see OAK-83.patch): 

- changed the 'filter' parameter on getJournal and diff to 'path'
- added a 'path' parameter to getRevisionHistory
- getNodes is the only method providing a 'filter' parameter

open questions: 

- globbing syntax for filter needs to be specified. '*' is a legal name 
character...
- should we allow for node name filtering as well?
- if yes, do we need to specify separate filters for properties and node names? 
  
- the implicit default filter needs to be specified (e.g. { "incl": [ "*", 
":childNodeCount" ], "excl": [ ":hash" ] })

 specify format and semantics of 'filter' parameter in MicroKernel API
 -

 Key: OAK-75
 URL: https://issues.apache.org/jira/browse/OAK-75
 Project: Jackrabbit Oak
  Issue Type: New Feature
  Components: mk
Reporter: Stefan Guggisberg
 Attachments: OAK-83.patch


 the following MicroKernel methods contain a 'filter' string parameter:
 - getJournal
 - diff
 - getNodes
 through the filter an API client could e.g. specify:
 - special 'meta' properties to be included (e.g. :hash)
 - glob patterns on the names of properties/child nodes to be included/excluded
 - path filter (for getJournal and diff)
 format/detailed semantics TBD, here's an initial proposal (json):
 {code} 
 {
   "path" : "/some/path",
   "incl" : [ ":hash", "*" ],
   "excl" : [ "tmp*" ]
 }
 {code} 
 name filter patterns should ideally be the same 
 format as specified for JCR Node.getNodes/getProperties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



