Re: [VOTE] Codename for the jr3 implementation effort (Was: [jr3] Codename)
[ ] Blackrabbit
[x] Oak
"The oak is a common symbol of strength and endurance ..." - Wikipedia
regards, david
Re: JCR 2.0 draft html version
hi guys, i tried to update the document sizes to be more meaningful... http://www.day.com/specs/jcr/2.0/ (possibly hit reload in your browser in case you may still have stuff in your cache) feedback still very welcome ;) regards, david On Mon, Sep 28, 2009 at 3:13 PM, Bertrand Delacretaz bdelacre...@apache.org wrote: Hi David, On Mon, Sep 28, 2009 at 2:59 PM, David Nuescheler da...@day.com wrote: please find a draft of the JSR 283 html version online. http://www.day.com/specs/jcr/2.0/ Cool ... After looking into the split-up I am tempted to create fewer, bigger documents instead Yes... pages like http://www.day.com/specs/jcr/2.0/3.6.1.5_DOUBLE.html make me think that a single big HTML document would be fine. -Bertrand -- David Nuescheler Chief Technology Officer mailto: david.nuesche...@day.com web: http://www.day.com/ http://dev.day.com twitter: @daysoftware
JCR 2.0 draft html version
Hi all, please find a draft of the JSR 283 html version online. http://www.day.com/specs/jcr/2.0/ this should mostly facilitate referring to parts of the specification. This is a draft that was generated mostly automatically so I expect a lot of clean-up work; please let me know if you run into pages that look weird, and feel free to just send me urls of broken pages. After looking into the split-up I am tempted to create fewer, bigger documents instead. Feedback very welcome. regards, david
JSR-283 passes the Final Approval Ballot
Dear Jackrabbit Community, I am very pleased to announce that JSR-283 has been approved by the Executive Committee of the JCP and is bound to become JCR 2.0 as a final standard. I would like to thank everybody for all their efforts in putting the RI & TCK together and sending in all the great feedback to the Expert Group. Please find below some interesting numbers on JSR-283 ;)
---
734 issues
1456 days
72 individual members of the expert group
43 organizations represented
3 ballots of the executive committee of the JCP
41 yes votes
0 abstentions
0 no votes
5 face-to-face meetings
76 telephone conferences
277 pages of spec
87 interfaces and classes
522 fields and methods
1895 testcases in the official TCK
100% signature coverage of the TCK
---
JCR 2.0 Content Repository for Java Technology API
http://jcp.org/en/jsr/detail?id=283
http://jcp.org/en/jsr/results?id=4979
-- David Nuescheler Chief Technology Officer mailto: david.nuesche...@day.com web: http://www.day.com/ http://dev.day.com twitter: @daysoftware
Thanks Issue-Report from PlugFest in Basel.
Dear TC members & Jackrabbit-devs, I would like to thank everybody who attended the CMIS PlugFest in Basel. I think it was very successful and we uncovered a lot of issues while having a lot of fun achieving 31 (!) client / server connections. http://liip.to/cmismatrix I think we should be able to use the above matrix to track ongoing CMIS interop testing. I am sure this can be an evolving base for everybody to contribute their test results to. Also find write-ups about the PlugFest here: http://dev.day.com/microsling/content/blogs/main/cmisplugfest2.html I also reported the issues that were logged [1] throughout the PlugFest as issue 161 - issue 170 in the CMIS jira. regards, david [1] http://www.day.com/o.file/cmis-issues.jpg?get=f2f7b2e3176fc1deb1d610ac0ad06ec9 -- http://dev.day.com
[Chemistry] URL Layout
Hi all, currently the URL layout of Chemistry is exposed in a fashion that prefixes the operation (e.g. /children/folder-id) and by coincidence also permits the use of slashes in the folder/document ids. I would like to propose a more natural mapping that also reflects a nicer mapping for repositories that choose to expose a hierarchy, and allows web infrastructure that makes use of the hierarchical nature of the URL to apply things like access control. my recommendation would be to go with something that uses the folder/document id as the path and then identifies the operation with a suffix (e.g. /folder-id.children.xml). so this would mean that instead of ...
/children/my/deep/folder/id
/children/9c56d0fcf93175d70e1c9b9d188167cf
/type/my/deep/folder/id
/type/9c56d0fcf93175d70e1c9b9d188167cf
... the URLs would be ...
/my/deep/folder.children.xml
/my/deep/folder.type.xml
/9c56d0fcf93175d70e1c9b9d188167cf.children.xml
/9c56d0fcf93175d70e1c9b9d188167cf.type.xml
WDYT? regards, david -- Visit: http://dev.day.com/
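To make the two layouts concrete, here is a tiny sketch (a hypothetical helper, not actual Chemistry code) contrasting the operation-prefixed style with the proposed suffix style:

```java
// Sketch of the two CMIS URL layouts discussed above (hypothetical helper,
// not part of Chemistry): the proposed style uses the folder/document id
// as the path and appends the operation as a suffix.
public class CmisUrlLayout {

    /** Current style: operation prefixed, e.g. /children/my/deep/folder */
    static String prefixStyle(String operation, String id) {
        return "/" + operation + "/" + id;
    }

    /** Proposed style: id as path, operation as suffix, e.g. /my/deep/folder.children.xml */
    static String suffixStyle(String operation, String id) {
        return "/" + id + "." + operation + ".xml";
    }

    public static void main(String[] args) {
        System.out.println(suffixStyle("children", "my/deep/folder"));
        // -> /my/deep/folder.children.xml
        System.out.println(suffixStyle("type", "9c56d0fcf93175d70e1c9b9d188167cf"));
        // -> /9c56d0fcf93175d70e1c9b9d188167cf.type.xml
    }
}
```

Because the id forms the path, ordinary path-prefix rules (access control, caching, rewriting) apply directly to the hierarchy.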
Re: Next steps for Chemistry
hi serge, Is this code already accessible somewhere ? I'd love to have a look at it before I come on Wednesday. Even a snapshot of your working version would be fine :) i think our focus is on getting as much as possible done for wednesday, so we will check stuff in whenever it makes sense... Is the RMI deployment a requirement of your implementation, or just an option ? it is an option. i would even go as far as saying that it is an undesirable option ;) regards, david
Updated list of attendees for CMIS PlugFest in Basel 29/30-April-2009
Dear TC Members & Jackrabbit developers, For the upcoming CMIS PlugFest I received tentative confirmations of attendance from the following people. In case I should have missed anyone who confirmed their attendance, please let me know.
Al Brown / IBM
Berry van Halderen / Hippo
Boris Kraft / magnolia
Cedric Huesler / Day
Dave Caruana / Alfresco
David Nuescheler / Day
Dominique Pfister / Day
Florent Guillaume / Nuxeo
Florian Mueller / OpenText
Jens Huebel / OpenText
Martin Hermes / SAP
Michael Marth / Day
Paul Goetz / SAP
Sameer Charles / magnolia
Serge Huber / Jahia
Ugo Cei / SourceSense
Volker John / Saperion
I think there are only very few people on this list who have not been at the Day HQ in Basel yet, so if you are not convinced that you will be able to find the address, please let me know and I will privately send you my cell phone #, or call the main reception at +41 61 226 98 98. We will start around 9am with coffee, an introduction, and setting up the various repositories and CMIS clients to make sure that we have everything we need from a network perspective. regards, david
Re: Incubating Chemistry
Hi guys, i think we definitely have to distinguish between the JCR bindings of Chemistry and a specific repository implementation's bindings. The JCR bindings of Chemistry make up a big part of my interest in Chemistry as a project, so I would really be interested in keeping those around in Chemistry to help bring people on board from the JCR community, especially since we really want to grow the community. regards, david On Sat, Apr 25, 2009 at 2:18 PM, Florent Guillaume f...@nuxeo.com wrote: On 24 Apr 2009, at 07:51, Felix Meschberger wrote: Jukka Zitting schrieb: On Thu, Apr 23, 2009 at 5:07 PM, Florent Guillaume f...@nuxeo.com wrote: Also I had envisioned the Jackrabbit-specific backend living in the Jackrabbit project, to simplify inter-project dependency management (as Chemistry is an infrastructure project). Sure, makes perfect sense. The way I see it, Chemistry would contain a generic CMIS-to-JCR bridge and Jackrabbit would use that bridge to CMIS-enable the repository. I updated the proposal accordingly. That the Jackrabbit backend resides in Jackrabbit makes perfect sense to me. How about Chemistry starting off with a first implementation of that backend, which could then be migrated to the Jackrabbit project once the interfaces and implementation have stabilized a bit? Yes, that would probably make things simpler, refactoring-wise. Florent -- Florent Guillaume, Head of R&D, Nuxeo Open Source, Java EE based, Enterprise Content Management (ECM) http://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87 -- Visit: http://dev.day.com/
Re: Chemistry / CMIS status
On Wednesday, April 8, 2009, Florent Guillaume f...@nuxeo.com wrote: On 7 Apr 2009, at 14:43, David Nuescheler wrote: Hi All, Given the fact that we are now looking into applying Florent's contributions, I would like to propose that we rename the jcr-cmis folder in the sandbox to chemistry. Thoughts? regards, david I'd suggest just creating a chemistry folder in the sandbox and keeping the jcr-cmis one until all the functionality in jcr-cmis has been subsumed by Chemistry. Sounds like a great plan to me. Regards, David -- Visit: http://dev.day.com/
Re: Chemistry / CMIS status
Hi All, Given the fact that we are now looking into applying Florent's contributions, I would like to propose that we rename the jcr-cmis folder in the sandbox to chemistry. Thoughts? regards, david On Tue, Apr 7, 2009 at 10:23 AM, Dominique Pfister dominique.pfis...@day.com wrote: Hi Florent, I was very busy lately, but now I finally found some time to look at your sources. Kind regards Dominique On Wed, Mar 11, 2009 at 1:58 AM, Florent Guillaume f...@nuxeo.com wrote: Hi all, Here's a status report on the progress of the Chemistry code. The API has been tweaked a bit to separate a programmer-usable API from a lower-level SPI that mirrors the CMIS spec. There is now an AtomPub server implementation (work in progress) that allows some read-only operations (for now): getting repo info, type info, children listing and document retrieval. You can check the latest sources by downloading http://hg.nuxeo.org/sandbox/chemistry/archive/tip.zip This is a maven-buildable project. The project includes a sample servlet so that folks can test their CMIS AtomPub clients against a simple in-memory implementation of the API (which is still very incomplete, but progressing well). Using: mvn -Dmaven.test.skip=true package you'll get a self-contained JAR which you can run with: java -jar chemistry-tests/target/chemistry-tests-0.1-SNAPSHOT-jar-with-dependencies.jar Just hit ^C to stop the server. As it's now at a stage where I'd really like more people eyeballing it and contributing, I plan on submitting it tomorrow so that it can be checked in by a committer in the sandbox -- if that's ok with you. Thanks, Florent -- Florent Guillaume, Head of R&D, Nuxeo Open Source, Java EE based, Enterprise Content Management (ECM) http://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87 -- Visit: http://dev.day.com/
Re: JSR-283 Proposed Final Draft posted, waiting for download fix.
Good news ;) I am happy to announce on behalf of the JSR-283 Expert Group that JSR-283 is out for Proposed Final Draft review. http://jcp.org/aboutJava/communityprocess/pfd/jsr283/index.html There are a lot of new and exciting features in the specification, and the structure of the specification was changed to make a big section independent of the Java language, since a lot of you ported JCR outside of the Java language. For a little more information, visit: http://dev.day.com/microsling/content/blogs/main/jsr283proposedfinaldraft.html Feedback is very welcome: jsr-283-comme...@jcp.org regards, david On Wed, Apr 1, 2009 at 8:44 PM, David Nuescheler da...@day.com wrote: Dear Jackrabbit-Devs & Sling-Devs, as you may have seen, JSR-283 has been posted for Proposed Final Draft. http://jcp.org/aboutJava/communityprocess/pfd/jsr283/index.html Unfortunately, there seems to be an error in the posting, since the download link on the page results in a Not Found error. I alerted the PMO of the JCP at Sun this morning CET. The PMO (Program Management Office) administers all JSRs and is responsible for posting the Proposed Final Draft. I will keep you posted on the progress. regards, david -- Visit: http://dev.day.com/
JSR-283 Proposed Final Draft posted, waiting for download fix.
Dear Jackrabbit-Devs & Sling-Devs, as you may have seen, JSR-283 has been posted for Proposed Final Draft. http://jcp.org/aboutJava/communityprocess/pfd/jsr283/index.html Unfortunately, there seems to be an error in the posting, since the download link on the page results in a Not Found error. I alerted the PMO of the JCP at Sun this morning CET. The PMO (Program Management Office) administers all JSRs and is responsible for posting the Proposed Final Draft. I will keep you posted on the progress. regards, david
Addendum to CCLA of Day Management
Hi Sam, I would like to extend the list of employees covered in the CCLA of Day Management to the following: The Schedule A (list of covered employees) addendum:
Tobias Bocanegra tri...@apache.org
Bertrand Delacretaz bdelacre...@apache.org
Michael Duerig mdue...@apache.org
Roy T. Fielding field...@apache.org
Stefan Guggisberg ste...@apache.org
Alex Klimetschek alex...@apache.org
Philipp Koch pk...@apache.org
Felix Meschberger fmesc...@apache.org
Thomas Mueller thom...@apache.org
David Nuescheler unc...@apache.org
Dominique Pfister dpfis...@apache.org
Peeter Piegaze ppieg...@apache.org
Marcel Reutegger mreut...@apache.org
Angela Schreiber ang...@apache.org
Carsten Ziegeler cziege...@apache.org
Jukka Zitting ju...@apache.org
I also faxed this list to (410)-803-2258. please let me know if there is anything else you need. regards, david -- David Nuescheler Chief Technology Officer Day Software AG Barfuesserplatz 6 / Postfach 4001 Basel Switzerland T +41 61 226 98 98 F +41 61 226 98 97
[cmis] initial architecture draft documentation on wiki
hi guys, to facilitate the conversation around the cmis sandbox architecture and goals i quickly put together a little bit of documentation from my perspective and dumped it on the wiki as a starting point. http://wiki.apache.org/jackrabbit/SandboxCMIS ... any sort of modification would be very welcome. regards, david -- Visit: http://dev.day.com/
[cmis] api and general structure, client implementation
hi all, i think we are making great progress on the server side of the cmis implementation and it is great to see both the ws and the atompub binding progress so quickly. since i think it should also be a goal of this implementation to make our code as re-usable as possible, it is great that it does not have any specific proprietary jackrabbit dependencies but really just jcr dependencies. on top of that i think it would also be great to expose the entire cmis model as a separate api. i think dominique already submitted an issue around specifying an api, so both the ws and the atompub binding can use that. https://issues.apache.org/jira/browse/JCRCMIS-7 i would like to take that a step further and also propose that we have a cmis client. i think this is something that is really important and would help everybody developing something around cmis a great deal. so i would propose an svn subproject structure that is something like this:
jcr-cmis/
  server/
  client/
  api/
does that make sense? regards, david -- Visit: http://dev.day.com/
simple search in GQL: (was: Re: [VOTE] Release Apache Jackrabbit 1.5.0)
hi torgeir, just the term that you are looking for will search all properties, just like in google. if you want to limit the query to a certain property you prepend its name separated by a colon. some examples:
foo            // finds foo in all properties
title:foo      // finds foo in the title property
title:foo bar  // finds all nodes that contain bar and have foo in their title
author:torgeir // finds all nodes that have a property author that contains torgeir
i am sure that if you would like to put some examples on the wiki that would be highly appreciated. regards, david On Fri, Dec 5, 2008 at 12:57 PM, Torgeir Veimo [EMAIL PROTECTED] wrote: On 5 Dec 2008, at 21:37, Marcel Reutegger wrote: Torgeir Veimo wrote: On 2 Dec 2008, at 23:45, Jukka Zitting wrote: * Simple Google-style query language. The new GQL query syntax makes it very easy to express simple full text queries. How do I do a full text search, i.e. searching for something in _all_ attributes, with this new syntax? Or is this not possible atm? you simply type in a term. GQL will translate that into a jcr:contains() on the context nodes. though, I'm not sure if that's what you want... You mean content nodes? I was hoping that there was a way to search any attribute. I can of course do some hacks whereby I concatenate the property values of all the attributes and put the string into a 'catchall' property, but i think such a search would be better handled in the indexing system. -- Torgeir Veimo [EMAIL PROTECTED] -- Visit: http://dev.day.com/
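To illustrate how this syntax maps to JCR queries, here is a toy translator (deliberately simplified; not Jackrabbit's actual GQL implementation, whose output differs in detail): a bare term becomes a jcr:contains() over all properties of the node, while a prop:term token restricts the search to that property.

```java
// Toy sketch of the GQL-style syntax described above (not the real
// org.apache.jackrabbit GQL parser): bare terms search all properties,
// "prop:term" tokens search only the named property.
public class SimpleGql {

    static String toXPathPredicate(String query) {
        StringBuilder sb = new StringBuilder();
        for (String token : query.trim().split("\\s+")) {
            if (sb.length() > 0) {
                sb.append(" and ");
            }
            int colon = token.indexOf(':');
            if (colon > 0) {
                // property-scoped term, e.g. title:foo
                sb.append("jcr:contains(@").append(token.substring(0, colon))
                  .append(", '").append(token.substring(colon + 1)).append("')");
            } else {
                // bare term searches all properties of the node
                sb.append("jcr:contains(., '").append(token).append("')");
            }
        }
        return "//*[" + sb + "]";
    }

    public static void main(String[] args) {
        System.out.println(toXPathPredicate("title:foo bar"));
        // //*[jcr:contains(@title, 'foo') and jcr:contains(., 'bar')]
    }
}
```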
Re: jcr-cmis sandbox
Also, I don't think we should implement any of the HTTP extensions in the AtomPub binding -- they are neither necessary nor desirable. We should show the TC how to implement it right, not just implement whatever they suggest. very good point! this also puts us into a good position to file issues for the CMIS jira ;) regards, david
Re: jcr-cmis sandbox
Hi Jukka, Most of the organizations on the technical committee of CMIS are already heavily involved at Apache either as contributors or as sponsors and are also on the JCR expert group. If there are existing Apache committers from other projects who'd be interested in working on this, then we could simplify things by opening write access in the Jackrabbit sandbox to all Apache committers. looking at this list I already see Paolo as an outside apache committer. so i think this would be a great idea. regards, david
Re: jcr-cmis sandbox
hi julian, thanks for your comments. ... Since functionally the CMIS specification is a subset of the JCR specification it allows a very simple and straight-forward mapping to a fully compliant JCR repository such as Jackrabbit. ... Yes, the more challenging part is the mapping *from* a JCR repository (how to deal with the information loss). yeah, that's true. it seems that the ideal cmis client (in our case) would be an spi client which in turn would expose jcr again. Defining a mapping will be useful, because it could be re-used to define the relation of CMIS and WebDAV. I think the technically most interesting approach would be to enhance WebDAV to carry the information it currently doesn't have (such as node type information), and then to build CMIS as an extension *into* the Jackrabbit WebDAV layer. absolutely. i think this would be particularly interesting given the fact that a lot of the functionality is already defined in webdav. i wonder if it is possible to steer the tc in that direction. Another thing the CMIS TC should look into are the various proposals for including support for hierarchies in AtomPub (see, for example, http://www.oracle.com/technology/tech/feeds/spec/draft-divilly-atompub-hierarchy-00.html). It seems to me that this problem is generic enough, and the solution should not be specific to CMIS. good point. do you want me to post this as an issue for the tc? ...or do you want to post it yourself. regards, david
jcr-cmis sandbox
Hi all, I am currently working in a technical committee at OASIS defining a document management interoperability specification called CMIS [1]. CMIS shoots for protocol-level interoperability between applications and various repository vendors. The specification is in a very early stage and a lot of things need to be addressed [2], but it has piqued the interest of a number of people at Apache already. Since functionally the CMIS specification is a subset of the JCR specification it allows a very simple and straight-forward mapping to a fully compliant JCR repository such as Jackrabbit. Similar to the existing protocol layers (webdav etc.) on top of JCR that are already part of Jackrabbit, I would like to propose that we initiate first tests with an implementation in a sandbox project. I think that there are going to be a lot of benefits from such an implementation. First it will allow any JCR implementation to be CMIS compliant automatically (once the specification is released ;) ) and allow us to find the issues to be fixed in the specification itself and drive it in a good direction. But most importantly it will provide a platform for interested parties to collaborate in the open on an implementation. Most of the organizations on the technical committee of CMIS are already heavily involved at Apache either as contributors or as sponsors and are also on the JCR expert group. Let me know what you think. regards, david [1] http://lists.oasis-open.org/archives/tc-announce/200810/msg3.html [2] http://roy.gbiv.com/untangled/2008/no-rest-in-cmis
[jira] Commented: (JCR-1837) Mac OS X 10.5 (leopard) transfer trouble
[ https://issues.apache.org/jira/browse/JCR-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642918#action_12642918 ] David Nuescheler commented on JCR-1837: --- i just checked with our servlet engine called CQSE, which is the one included in the CRX quickstart. as mentioned before it works without a problem, here is the dump...
C-68-#00 - [PUT /content/dam/STROMBERG_2_2.mp4 HTTP/1.1 ]
C-68-#45 - [User-Agent: WebDAVFS/1.7 (01708000) Darwin/9.5.1 (i386) ]
C-68-#000102 - [Accept: */* ]
C-68-#000115 - [X-Expected-Entity-Length: 302108459 ]
C-68-#000152 - [If: (3ef131eb-eb74-47a4-ae97-a72d94ee6ac9-1) ]
C-68-#000200 - [Authorization: Basic YWRtaW46YWRtaW4= ]
C-68-#000239 - [Connection: close ]
C-68-#000258 - [Host: localhost:1234 ]
C-68-#000280 - [Transfer-Encoding: Chunked ]
C-68-#000308 - [ ]
C-68-#000310 - [800 ]
Mac OS X 10.5 (leopard) transfer trouble
Key: JCR-1837
URL: https://issues.apache.org/jira/browse/JCR-1837
Project: Jackrabbit
Issue Type: Bug
Components: jackrabbit-webdav
Environment: Mac OS X 10.5 WebDAVFS Jackrabbit 1.4.5
Reporter: Cédric Chantepie
Fix For: 1.4.1
When trying to upload (put) a file of at least 10 MB, the Mac OS X 10.5 Finder (with its included WebDAVFS client part) fails the transfer with error code -36, eventually leaving a lock that causes more trouble for other Mac OS X clients (even on Tiger), and eventually crashing the WebDAV mount. It may be related to Apple's change to WebDAV PUT, using Transfer-Encoding: Chunked as of the 10.5 release (http://discussions.apple.com/message.jspa?messageID=7282319#7282319).
-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
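For readers unfamiliar with the chunked transfer encoding suspected here: a chunked PUT carries no Content-Length, only hex-encoded per-chunk sizes such as the trailing "800" (2048 bytes) in the dump above. A minimal decoding sketch, purely illustrative and not Jackrabbit's actual WebDAV code:

```java
// Illustrative sketch of HTTP/1.1 chunked transfer decoding (not
// Jackrabbit code): each chunk is "size-in-hex CRLF data CRLF", and a
// zero-size chunk ends the body. The "800" in the dump above is such a
// hex size line (0x800 = 2048 bytes).
public class ChunkDecoder {

    /** Decodes a chunked-encoded body held in a String. */
    static String decodeChunked(String body) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (true) {
            int eol = body.indexOf("\r\n", pos);
            int size = Integer.parseInt(body.substring(pos, eol).trim(), 16);
            if (size == 0) {
                break; // final zero-length chunk terminates the body
            }
            out.append(body, eol + 2, eol + 2 + size); // the chunk data
            pos = eol + 2 + size + 2; // skip data plus its trailing CRLF
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decodeChunked("5\r\nhello\r\n0\r\n\r\n")); // hello
    }
}
```

A server that only looks at Content-Length sees none, which is one way such a PUT can fail against implementations that do not handle chunked requests.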
[jira] Commented: (JCR-1837) Mac OS X 10.5 (leopard) transfer trouble
[ https://issues.apache.org/jira/browse/JCR-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642932#action_12642932 ] David Nuescheler commented on JCR-1837: --- hi cedric, as you can see in the X-Expected-Entity-Length header, it is 302 MB. regards, david Mac OS X 10.5 (leopard) transfer trouble Key: JCR-1837 URL: https://issues.apache.org/jira/browse/JCR-1837 Project: Jackrabbit Issue Type: Bug Components: jackrabbit-webdav Environment: Mac OS X 10.5 WebDAVFS Jackrabbit 1.4.5 Reporter: Cédric Chantepie Fix For: 1.4.1 When trying to upload (put) a file of at least 10 MB, the Mac OS X 10.5 Finder (with its included WebDAVFS client part) fails the transfer with error code -36, eventually leaving a lock that causes more trouble for other Mac OS X clients (even on Tiger), and eventually crashing the WebDAV mount. It may be related to Apple's change to WebDAV PUT, using Transfer-Encoding: Chunked as of the 10.5 release (http://discussions.apple.com/message.jspa?messageID=7282319#7282319). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
Re: Release 1.4.1 for Jackrabbit Jcr-Server
+1 regards, david
Introducing CMIS
Today a broad group of document management vendors led by IBM, Microsoft and EMC announced their efforts around a protocol specification for content management interoperability [1]. I would like to congratulate the group on all the effort that has been put into this specification, and we look very much forward to participating actively in the standardization process that hopefully will be kicked off soon. I am excited that the ECM/DM market has decided to start supporting a protocol specification, which was an often discussed gaping hole in the enterprise content management market [2][3]. Since the protocol functionally matches, on a protocol level, a large subset of what JCR specifies on the API level for Java, it is a great opportunity for Jackrabbit to expose CMIS on top of JCR with very little effort. This would definitely allow Jackrabbit to be included in an integrated Enterprise/Document-centric setup without compromising on the flexibility and the broader usecases and modelling capabilities supported by a fully compliant JCR implementation. On the other hand it would also allow all (non-jackrabbit) JCR repositories to become CMIS compliant instantly. Since the specification is still in the early stages of development this may very well be subject to change; if I look back at the early (pre-release) versions of JCR there was a lot of change. regards, david [1] http://www.informationweek.com/blog/main/archives/2008/09/proposed_standa.html [2] http://www.infoq.com/articles/nuescheler-jcr-rest [3] http://dev.day.com/microsling/content/blogs/main/jcr-loves-atom.html
[jira] Updated: (JCR-1556) PersistenceManager API change breaks backward compatibility
[ https://issues.apache.org/jira/browse/JCR-1556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Nuescheler updated JCR-1556: -- Summary: PersistenceManager API change breaks backward compatibility (was: PersistenceManager API change breacks backward compatibility) PersistenceManager API change breaks backward compatibility --- Key: JCR-1556 URL: https://issues.apache.org/jira/browse/JCR-1556 Project: Jackrabbit Issue Type: Bug Components: jackrabbit-core Affects Versions: core 1.4.3 Reporter: Tobias Bocanegra Persistence Manager API change introduced in JCR-1428 breaks backward compatibility. although this is not a public visible API it renders 3rd party PMs invalid that do not extend from AbstractPersistenceManager. at least for the 1.4.3 patch release, we should not do this. suggest to revert the API change for the next 1.4.4 release, but leave the method on the abstract pm, and introduce it only for 1.5. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
Re: Concurrent modifications
I indeed get the expected exception, but not for
n1.getProp
n2.getProp
n1.setProp
n1.save
n2.setProp
n2.save
...which I hoped for. i think it would be totally legitimate for a content repository to either throw or not... personally, i would not support a test case in the tck that tests for either behavior. regards, david -- Visit: http://dev.day.com/ - Day JCR Cup 08 - Win a MacBook Pro
Re: Concurrent modifications
Yes, that is correct -- this is not about JCR compliance, but about what we expect *Jackrabbit* to do. ah i see... ok, my expectation would be that jackrabbit does not throw ;) regards, david -- Visit: http://dev.day.com/ - Day JCR Cup 08 - Win a MacBook Pro
[RT] Evolution of Persistence
[RT] Evolution of Persistence To be able to address some of the performance and scalability limitations that we ran into in the past, based on our growing experience, I would like to propose that we kick off a discussion around an evolution of the persistence model of Jackrabbit. In various conversations on the topic of persistence I observed that horizontal, free scalability in a cluster for both reads and writes is a topic that we need to keep in mind. I think that because of that we should make the persistence layer aware of the hierarchy at some point in time, to allow for much more fine-grained locks on a journal basis. Also, I think that we could look into ways a cluster node can indicate or determine which updates in the cluster need to be dispatched to it. In order to allow scalability in terms of writes to the repository, I think it is important that the cluster master does not need to actively write the payload of transactions but only orchestrate them. Of course I also think that, based on the experience with the current persistence model, we need to make sure that we deliver a scalable solution for all aspects of the JCR api where it employs RangeIterators. This includes lists of childnodes, references and the like. I would like to find out if we can take an iterative, evolutionary approach to a more efficient and more scalable persistence. As next steps I would like to propose that we build an option that allows for an index of the cluster journal, which allows us to build a journal-backed persistence manager using the current PM interface, which would essentially have a no-op for writes. In addition to that, as a next step, I would like to propose that we have the change log operating directly against the journal as well. I would call this journal centric persistence.
I think this could give us a good indication on how much performance gain we can get out of making sure that information is only persisted once (ignoring the query index for now) and it should allow us to test a purely journal based persistence and then take it from there to evolve into a more mvcc based and more freely scalable architecture. regards, david
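The journal-centric idea above can be sketched in a few lines; this is a minimal in-memory illustration under my own assumptions, not actual Jackrabbit cluster code: change logs are appended once to a monotonically numbered journal, and each cluster node replays only the records beyond the revision it has already consumed.

```java
// Minimal sketch (assumptions, not Jackrabbit code) of journal-centric
// persistence: every change log is appended once to a shared, numbered
// journal, and a cluster node catches up by fetching records past the
// last revision it has seen.
import java.util.ArrayList;
import java.util.List;

public class Journal {

    private final List<String> records = new ArrayList<String>();

    /** Appends a serialized change log; returns its revision number. */
    synchronized long append(String changeLog) {
        records.add(changeLog);
        return records.size(); // revision of the newly appended record
    }

    /** Records a node has not yet consumed, i.e. those after 'revision'. */
    synchronized List<String> since(long revision) {
        return new ArrayList<String>(records.subList((int) revision, records.size()));
    }

    public static void main(String[] args) {
        Journal journal = new Journal();
        journal.append("add /a");
        long rev = journal.append("set /a/title");
        journal.append("add /b");
        // a cluster node that has consumed up to 'rev' only needs later records
        System.out.println(journal.since(rev)); // [add /b]
    }
}
```

With writes going straight to the journal, a "journal-backed" persistence manager's store operation can indeed be a no-op, as suggested above.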
Re: JCR thesis
hi martin, i would agree that it makes sense to compare different technologies in terms of their featureset... i agree with andreas that comparing performance beyond one specific implementation may be complicated. generally performance is very much subject to configuration and relevant usecases (loadtests) it's therefore even hard to compare different implementations of the same technology. ...having said that i would certainly be interested in helping to come up with relevant usecases to benchmark jcr. regards, david 2008/3/31 Martin [EMAIL PROTECTED]: I've talked to my supervisor and comparison to other technologies with some benchmarks could be interesting. So, the question is: what would be the other technologies? Some candidates :-): RDBMS (e.g. Oracle Database, MySQL) - probably using some ORM (JPA, Hibernate)? WebDav local filesystem different versions of Jackrabbit different implementation of jsr-170/jsr-283 XML databases (well, I haven't worked with any XML db engine, so I don't know what would be the best choice - e.g. Apache Xindice?) ODBMS (as far as I remember, I've only worked with object databases in sense of using ORM on top of RDBM and I've heard something about Caché) I know I'm mixing different things together, but for now... A final list will be shorter, to be able to make deeper comparison. Since you have more experience with JCR, you are more than welcome to add/remove items or change their order. And this is also way how to include technology you would like to see in comparison... Thanks, Martin
Re: Same name siblings
Hi Jukka, thanks a lot for bringing up the point. Support for same name siblings is troublesome and currently the best practice is to avoid them if possible. In many cases the default response when we see people having problems with SNS is to tell them not to use the feature. I think one of the problems that I see is that people tend to use SNS for anonymous collections without a sort order, which should be a very lightweight operation; unfortunately, choosing SNS intuitively for that is probably just the most unstable and heavyweight construct someone could choose in JCR. I have not seen a lot of real, justified use cases for SNS. On the other hand, I would really like to give people a means to work with large anonymous unordered collections. I think a feature that would address the "I have a bag of objects and I just want to persist them without thinking of a name for each" use case would definitely be great. Maybe this could be the introduction of an addNode() without a childnode name. I think that's lame. We should either treat SNS as a first-class feature that we just haven't been able to make work yet, or explicitly deprecate it and plan to drop or at least seriously limit the feature as far as is permitted by the JCR standard. The current status where SNS is kind of supported but you should not use it! is IMHO not sustainable in the long run. WDYT, is SNS worth the effort, or should we consider dropping it? I don't think that it is problematic to have some features that are not optimized. For example, in most databases DELETE operations are slow and people are referred to TRUNCATE functions instead. I think people are using SNS for the wrong reasons. regards, david
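One way the proposed addNode() without a child-node name could work (a hypothetical sketch only; no such API exists in JCR): the repository generates a unique, meaningless name internally, so callers can persist a "bag of objects" without inventing names and without falling back on same-name siblings.

```java
// Hypothetical sketch of name generation for an addNode() without a
// child-node name (this API is a proposal, not part of JCR): each call
// yields a fresh, collision-free name for the anonymous child.
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class AnonymousChildNames {

    private final Set<String> usedNames = new HashSet<String>();

    /** Returns a fresh, collision-free child node name. */
    String nextName() {
        String name;
        do {
            name = "n" + UUID.randomUUID().toString();
        } while (!usedNames.add(name)); // retry on the (unlikely) collision
        return name;
    }

    public static void main(String[] args) {
        AnonymousChildNames names = new AnonymousChildNames();
        String a = names.nextName();
        String b = names.nextName();
        System.out.println(a.equals(b)); // false: every call yields a distinct name
    }
}
```

Since the generated names carry no meaning and no order, such a collection stays lightweight: no sibling index bookkeeping, no renumbering on removal.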
Content Object Mapping - jcrom.org
hi all, Olafur Gauti Gudmundsson pointed me today to his effort called JCROM (pronounced Jack-rom). [1] I am excited about the refreshing, quick and simple annotation based approach [2] and would like to find out what everybody's thoughts are on possibly finding synergies with the ocm framework that we have in Jackrabbit. regards, david [1] http://jcrom.org [2] http://code.google.com/p/jcrom/wiki/TwoMinuteIntro
Re: Content Object Mapping - jcrom.org
hi alex, I don't want to sound as if I don't appreciate this effort, but I would have thought that people looking into this direction would first consider the JPA annotations and then introduce new/custom annotations for special cases (I think a parallel with Hibernate and its JPA support is worth drawing here). Sounds fine to me. Moreover, when speaking about mapping solutions I would be interested in the level of customization they allow and, keeping in mind some of the JCR storage restrictions, how these are handled (the first example that comes to my mind is a parent-child relationship with 10k children). I am not sure if you are referring to the performance penalties in the current jackrabbit implementation with a large number of direct child nodes. I want to be clear though that JCR itself does not have any limitation or performance penalty with a large number of direct child nodes. I do believe that these initiatives are helpful for the JCR community, but I would encourage people to check how much can be done in the JPA direction. I agree that using JPA annotations would help existing JPA based applications... I don't think it would be too complex to use (possibly a subset of) JPA annotations in addition. I am convinced that the JCROM project would be happy to receive a patch ...after all it is an opensource project ;) regards, david
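To make the annotation-based idea concrete, here is a tiny self-contained sketch of how an annotation-driven mapper could derive JCR property names from bean fields. The @JcrProperty annotation and MiniMapper are hypothetical stand-ins written for this example, not the actual JCROM or JPA API:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in annotation; real code would reuse JPA or
// JCROM/OCM annotations instead of defining its own.
@Retention(RetentionPolicy.RUNTIME)
@interface JcrProperty { String name() default ""; }

class Page {
    @JcrProperty(name = "jcr:title") String title = "Home";
    String internal = "not mapped"; // unannotated fields are ignored
}

class MiniMapper {
    // Collect annotated fields into a property-name -> value map,
    // the shape a mapper would then hand to Node.setProperty().
    static Map<String, Object> toProperties(Object bean) throws IllegalAccessException {
        Map<String, Object> props = new HashMap<String, Object>();
        for (Field f : bean.getClass().getDeclaredFields()) {
            JcrProperty p = f.getAnnotation(JcrProperty.class);
            if (p != null) {
                f.setAccessible(true);
                props.put(p.name().isEmpty() ? f.getName() : p.name(), f.get(bean));
            }
        }
        return props;
    }
}
```

The same reflection pattern is what makes it cheap to accept JPA annotations in addition to custom ones.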
Re: Status of proposed JCR 2.0 changes
Hi Michael, Which of the proposed changes have been accepted or will be implemented? http://wiki.apache.org/jackrabbit/Proposed_JCR_2.0_API_Changes Thanks Thomas extracted the changes for the Jackrabbit wiki from the public review document, so these come directly from the expert group. So I think the consensus can be considered reasonably solid. That said, given that things may still change quite a bit until the final release, these cannot be considered final yet. The code for the reference implementation will be developed inside the Jackrabbit project again, so you will see these features implemented sooner rather than later. regards, david
[jira] Commented: (JCR-1212) JCR2SPI Node.hasProperty(String) optional property incompatibility with Jeceira
[ https://issues.apache.org/jira/browse/JCR-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12542409 ] David Nuescheler commented on JCR-1212: --- I would take the following position on this. I think that JCR2SPI and SPI2JCR should be as tolerant as possible with different JCR compliant implementations. JCR compliant being the keyword ;) I guess I would agree to make modifications in any SPI layer in case we find out that we are not lenient enough to allow compliant repositories. I would probably have the jeceira guys fix their behavior in case they clearly violate the JCR spec. David, do you have an indication on what JCR method exactly you are talking about, and if the jeceira behaviour according to your reading of the JCR v1.0 spec is correct? Does that make sense? JCR2SPI Node.hasProperty(String) optional property incompatibility with Jeceira - Key: JCR-1212 URL: https://issues.apache.org/jira/browse/JCR-1212 Project: Jackrabbit Issue Type: Improvement Components: SPI Affects Versions: 1.4 Environment: 1.4-SNAPSHOT Reporter: David Rauschenbach Priority: Minor Jeceira throws a PathNotFoundException when an SPI2JCR-wrapped Jeceira repository gets invoked with the SPI getPropertyInfo, specifying an optional property that does not exist for a given node instance. JCR2SPI only expects an ItemNotFoundException to be thrown in such a case, which prevents Node.hasProperty(String) from returning true/false, and instead results in a RepositoryException being thrown, which in effect is an interoperability issue. JCR2SPI compatibility with Jeceira-based repositories would be significantly improved if the code in NodeEntryImpl.java:loadPropertyEntry(PropertyId) caught not only ItemNotFoundException, but also PathNotFoundException, before returning null in both cases. 
Proposed change to NodeEntryImpl.java:

    private PropertyEntry loadPropertyEntry(PropertyId childId) throws RepositoryException {
        try {
            PropertyState state = factory.getItemStateFactory().createDeepPropertyState(childId, this);
            return (PropertyEntry) state.getHierarchyEntry();
        } catch (ItemNotFoundException e) {
            return null;
        } catch (PathNotFoundException e) { // -- new
            return null;                    // -- new
        }
    }

-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
Re: [VOTE] Approve the Sling project for incubation
+1 regards, david
Re: Jackrabbit, the database
hi all, i can appreciate both positions: looking at jackrabbit as the datastore, or looking at jackrabbit as running on top of a datastore (rdbms). personally, i don't believe that the latter perception will go away for quite a while, so i think jackrabbit should support both views. in my experience, i have really seen three different views so far:

A: i want to store the entire repository in a relational database. this allows me to use hot backup and clustering of the database.

B: i want to store all the meta information in the database, but i also have those really large blobs (movies) that i don't trust the database with.

C: i want it to be fast, reliable and easy to deploy and run in production.

(does someone on the list not fall into one of these three options?) i think with the global datastore and the pm architecture jackrabbit is flexible enough to offer options for all three views of the world. i think a good next step would be to explicitly support/document these three well defined persistence models and make it really easy for people to just pick and choose their favorite approach and run with an ootb config. regards, david ps: of course i have a personal preference, but i think my preference is well-known ;) On 8/20/07, Thomas Mueller [EMAIL PROTECTED] wrote: Hi, management won't. political reasons. won't move to Jackrabbit *if* Jackrabbit cannot store it in oracle. I agree. My guess is about 50% of larger organizations want a database as the backend, even if databases are slower. So about 50% don't really care. Thomas
Re: JSR 283: EventCargo API suggestion
Hi Bertrand, thanks for your comment. I agree that it is sometimes hard to make sense on an application level of what operation triggered the creation of an event. One way of trying to make that more meaningful was the introduction of method events in JSR-283. I think that an EventCargo API makes for a very interesting option that we will most certainly consider. Thanks for your input. regards, david On 8/17/07, Bertrand Delacretaz [EMAIL PROTECTED] wrote: Hi, (ccing dev@jackrabbit.apache.org as we have discussed this there, see [1]) I recently implemented a JCR-based audit trail module, and making application-level sense of the Events that an EventListener receives required quite some effort. In a real-life app with both human users and automated processes generating JCR data, an EventListener is bombarded with Events that sometimes make little sense at the application level, and sorting them out to create a meaningful audit trail can be tricky. I think the following additions to the JCR API would make this much easier:

1) A new Session.setEventCargo(EventCargo) method, which causes the supplied EventCargo to be used for all Events generated by this Session from then on.

2) A new Event.getCargo() method, which returns the EventCargo that was set in the Session when this Event was generated, or null if none was set.

3) The EventCargo class provides set/getUserData() methods to attach any Serializable to it. Serializable is required to be able to store the EventCargo to a transaction/clustering journal, and the EventCargo implementation must be Serializable as well.

The idea is to use setEventCargo to indicate what is currently happening at the application level (user is editing data, automatic process is generating additional nodes, etc.) to make it easier to analyze the generated Events. Thanks for considering this suggestion - apart from my personal experience, we've had several related user requests on the Jackrabbit mailing lists where this would help a lot.
-- Bertrand Delacretaz -- [EMAIL PROTECTED] -- http://www.codeconsult.ch [1] http://preview.tinyurl.com/35rrg8
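The proposal above can be sketched as plain Java. Note that EventCargo, and the Session/Event additions, are the *proposed* API, not anything that exists in JCR; the Sketch* classes are stand-ins written only to show the intended flow:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Proposed (hypothetical) EventCargo: an opaque Serializable tag.
class EventCargo implements Serializable {
    private Serializable userData;
    public void setUserData(Serializable data) { this.userData = data; }
    public Serializable getUserData() { return userData; }
}

// Stand-in for javax.jcr.observation.Event with the proposed getCargo().
class SketchEvent {
    private final EventCargo cargo;
    SketchEvent(EventCargo cargo) { this.cargo = cargo; }
    public EventCargo getCargo() { return cargo; }
}

// Stand-in for javax.jcr.Session with the proposed setEventCargo().
class SketchSession {
    private EventCargo cargo;
    final List<SketchEvent> firedEvents = new ArrayList<SketchEvent>();
    public void setEventCargo(EventCargo c) { this.cargo = c; }
    // Every event generated from now on is stamped with the current cargo.
    void save() { firedEvents.add(new SketchEvent(cargo)); }
}
```

An audit-trail listener would then read event.getCargo().getUserData() to tell, say, a metadata-extractor save apart from a human edit.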
Re: [RT] JCR observation: adding cargo data to events?
hi bertrand, i agree that this would be interesting, and could get us to a certain extent out of the method events issue. if you don't mind, and others feel like this would be a valuable addition, you could eventually send this as a public review comment to [EMAIL PROTECTED] so i can include it in our digest. regards, david On 8/16/07, Felix Meschberger [EMAIL PROTECTED] wrote: Hi, This would probably really be worth it. I am also thinking of somehow tagging the operations, for example to provide more information in the case of item removal, where very little is actually available in the event, leading to guessing or having to keep caches. On the other hand, save operations may encompass a whole number of possibly unrelated tasks, but this might be something for the user to handle. To get around the clustering issue, it might be defined that the cargo should be serializable. Regards Felix On Thursday, 16.08.2007, at 11:24 +0200, Bertrand Delacretaz wrote: Hi, (this might be more a question for the JSR283 people, but I'd like to have this community's opinion) I recently implemented a JCR-based audit trail module, and making application-level sense of the Events that an EventListener receives required quite some effort (and laziness is a virtue, no? ;-) In a real-life app with both human users and automated processes generating JCR data, an EventListener is bombarded with Events that sometimes make little sense at the application level, and sorting them out to create a meaningful audit trail can be tricky. See also the recent How to figure out if there was a rename operation on a Node thread on users@ (http://tinyurl.com/2qfbt3) for a similar problem. This event analysis would be much easier if the save operations could be enhanced with some cargo Object that is opaque for JCR, but passed on to Events to give more info about what's happening at the application level.
Here's my suggestion (which would need changes to the JCR spec): Maybe session.save() and other save() methods could take an optional Object parameter that is made available in the observation Event with a new getCargo() method? This object can be used, for example, to indicate that the nodes being saved were autogenerated by some metadata extractor, to mark the Events as such in an audit trail, separating them from Events that indicate human user actions. I'm wondering if this might be a valid suggestion for JSR-283, what do people think? I haven't seriously evaluated the implications at the implementation level; this might be tricky to implement in clustered settings (although the cargo could probably be saved in the journal). -Bertrand, nostalgic about the cargo concept in Clipper code circa 1987 ;-)
Re: JCR 2.0 extensions
hi thomas, But since jsr283 should be mainly backwards compatible It's not. Some methods now return something that they didn't before. as christoph mentioned, i think those are in relatively isolated places, so we would probably try to postpone any work that would require the api change (which in my mind is very little) to the latest point in time possible. In case we do run into compatibility issues, I think we have a good case to request a change in JSR 283. Maybe making the JSR 283 API backwards compatible is the best solution. that would be ideal. but since there are some obvious corrections needed, we would like to make those fixes as soon as possible. in particular, the example of adding a return value that you mention above does not break existing code, therefore i consider it relatively harmless and suitable for a 2.0 release. regards, david
The rationale behind the Abstract Query Model [was: Xpath deprecated]
Hi All, I would like to try to make the argument for the AQM and explain why it is not about reinventing the wheel. I personally hate long emails, so please accept my apologies. If you are interested in the topic though, I can guarantee that this will save you some time in future discussions. In the JSR-283 Expert Group we have representation from just about every large content repository / content management vendor that is active in the java space. Most of these vendors have been in the content repository space for decades and have a very good understanding of what their existing customers' requirements are. They have very large install bases, and the infrastructure they provide should be considered a significant long-term investment. When it comes to the Query Facility in a content repository, the members of the Expert Group came to the conclusion that there is a certain set of functionality that we can support across many actual real-life repository implementations. And this is what should be mandated by the specification. It is important to understand that this feature set is well negotiated. There is not a lot of wiggle room from a functionality standpoint due to the existing implementations. I believe that the members of the JSR-283 expert group are the right people to judge what is implementable in a reasonable timeframe in their respective content repositories. To illustrate the simplified query functionality landscape as I see it, I tried to explain it visually: http://www.day.com/o.file/aqm1.png?get=08c5075f4f07b12ae1a9269044658cc1 As mentioned above, I think the question of how large the black circle should be, to still allow for a reasonable adoption of a standard, is not something that should be subject to this discussion here. I guess we would all agree that in a perfect world we would all love to expose the most feature-rich query interface, which I agree would probably (as of today) be a full XQuery implementation.
If we go back to the real world, we are still stuck with the problem, from a specification perspective, of how to describe the black circle in the most precise, clear and concise way. The specification needs to allow a repository vendor to know exactly what features they need to expose to comply with the specification, and also what a user can expect from a content repository from a query perspective. In JCR v1.0 (aka JSR-170) we decided to use a subtractive way of specifying the query feature set. I tried to visualize that in the following chart: http://www.day.com/o.file/aqm2.png?get=e0532f1c6f2e6ca93ed9bf713eb3b6fe So we specified XPath 2.0 as the basis and then tried to identify everything that cannot be mandated to a content repository based on real-world implementations. Which turned out to be a lot. On the other hand we found ourselves in the situation that features like full text search, full text syntax, ranking, ordering, projections and so on were not standardized by XPath 2.0. So we ended up specifying a fuzzy subset plus sizable additions. Please keep in mind that defining the feature subset is not trivial, since many repositories have, for example, limitations on path queries. Based on this experience we needed to look for something that was extensible for the future and would allow us to describe in a much crisper way how the query facility works. To get a clear picture of what the content repository needs to be able to provide, we defined the Abstract Query Model. I am convinced that the Abstract Query Model provides a very clear and concise description of the query facilities of a content repository, and therefore I personally am a big supporter of specifying it this way. Since JCR always intended to leave it up to the repository user what query language they prefer, we already introduced in JCR v1.0 a mechanism that allowed support for additional query languages to be added on a per-repository basis.
It is evident that there is no single query language that is best suited for all types of queries in all use cases. In JSR-283 we enhanced this extensibility. Instead of just allowing the content repository to expose multiple query languages, we now even allow the developer to use query language parsers that are content repository vendor independent. http://www.day.com/o.file/aqm3.png?get=97e829a34161d813fbd2fe3c94f9ec01 I am sure that there are a number of discussions regarding query, and I would like to make sure that we do not confuse some of the issues at hand. Please consider the following three thread topics as a new subject for your post when commenting: --- (1) JCR should offer more powerful query facilities: I would like the content repository vendors to agree on broader functionality so it maps better to something like XQuery. [ this should be addressed or cc'd to [EMAIL PROTECTED] I would like to inform you though that we have had this discussion pretty much for the last 6 years in the expert group, and the current consensus is well negotiated. ]
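For reference, the parser-independent direction can be illustrated with the QOM factory API as it eventually shipped in JCR 2.0: a query built directly against the Abstract Query Model, with no query-language parser involved. This is a sketch (it assumes a live Session; the node type and property names are just examples):

```java
import javax.jcr.Session;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import javax.jcr.query.qom.Constraint;
import javax.jcr.query.qom.QueryObjectModel;
import javax.jcr.query.qom.QueryObjectModelConstants;
import javax.jcr.query.qom.QueryObjectModelFactory;
import javax.jcr.query.qom.Selector;

// Build "find all nt:resource nodes with jcr:mimeType = image/png"
// straight from AQM building blocks, bypassing XPath/SQL parsing.
public class QomExample {
    public static QueryResult findPngResources(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        QueryObjectModelFactory f = qm.getQOMFactory();
        Selector selector = f.selector("nt:resource", "r");
        Constraint constraint = f.comparison(
                f.propertyValue("r", "jcr:mimeType"),
                QueryObjectModelConstants.JCR_OPERATOR_EQUAL_TO,
                f.literal(session.getValueFactory().createValue("image/png")));
        QueryObjectModel qom = f.createQuery(selector, constraint, null, null);
        return qom.execute();
    }
}
```

A language parser's only job then is to translate its syntax into exactly these factory calls.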
Re: Question
Hi, Thanks for the additional information. Here is what we are trying to accomplish. We have several sites {Public, Local, National, Press, etc...} and wanted to use separate workspaces for each one in order to segment the content. However, occasionally, content from one site must refer to content from another site. If we implemented this using workspaces, a node from one workspace would need to refer to a node from a different workspace. Sounds like a grouping to me... ;) I think the misconception (and misuse) of workspaces is one of the most common issues around Content Modeling in general. In the JCR Expert Group we found that we still have to explain the very abstract concept of Workspaces in much more detail and give some best practices, guidelines and clear indications on when to use workspaces. I will try to give it a first shot here: Workspaces are useful in combination with versioning or cross-workspace operations (such as merge or update). That means if you have nodes with the same UUID in various workspaces, it is likely that your usecase is valid. In case you have no nodes with the same UUID in different workspaces, it is likely that you would be better served by folders (nodes in the hierarchy), since the workspace is also the boundary for references and query. There are a few usecases that call for this additional isolation, but one has to be clear about the implications. So to come back to your datamodel, I would recommend something like: /content/public, /content/national, /content/local, ... This would for example also allow you to search everything, which may or may not be a desired side effect. Currently I can't see a drawback of modeling things into one workspace. On a completely different note I would like to mention that (since WebCM is our core business) I would also discourage the use of (hard) references for almost every usecase in WCM.
In WCM I think that dealing with a dangling reference (in JCRv1.0 pretty much a string, containing a UUID) on an application layer is more desirable. The additional overhead and constraint of the referential integrity on a content layer is constricting... I would certainly be interested in continuing this discussion since I think that good content modeling guidelines or recommendations are something that we lack in JCR and the content space in general. regards, david
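A quick sketch of the "search everything" benefit of the single-workspace layout recommended above (JCR 1.0 API; assumes a live Session, and nt:unstructured stands in for whatever node types the sites actually use):

```java
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

// With /content/public, /content/national, /content/local, ... all in ONE
// workspace, a single query spans every site. A query can never cross a
// workspace boundary, so the multi-workspace layout cannot do this.
public class CrossSiteSearch {
    public static NodeIterator findAll(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "/jcr:root/content//element(*, nt:unstructured)", Query.XPATH);
        return q.execute().getNodes();
    }
}
```

This is illustration only; it needs a configured repository to run.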
Re: Question
Hi, the JSR-170 spec explicitly states that references are within the same workspace only. generally, there are a number of issues with cross workspace references. to mention one, your session is tied to one workspace and access to another workspace is not obvious without creating another session. also, since there can be multiple workspaces containing a node with a given UUID, it would not be trivial to identify the node just through the UUID. referential integrity is a whole different issue. In JSR-283 cross repository and cross workspace references are considered a topic to resolve. In my experience I have found that a lot of people who were interested in cross workspace references were not really looking for all the attributes of a JCR reference, and either ended up putting all the data into one workspace (rightfully so, since the workspace metaphor is frequently abused as some grouping or even access control mechanism) or used for example UUIDs as strings and resolved the reference in the application with a getNodeByUUID() call on the target workspace. Maybe you can outline your usecase in a little more detail; I would be happy to see what suggestions I could come up with. regards, david ps: I read the property in your example as a reference property, since there is no REFERENCEABLE property type. I assume that's correct, right? On 7/5/07, qcfireball [EMAIL PROTECTED] wrote: I spent about 2 hours looking for the appropriate forum to post this question. This one is the best I came up with. Please forgive me, but my question is not Jackrabbit specific. What I want to do is be able to refer from one node via a property to another node in a different workspace. For example:

WorkspaceA
  root node
    Node 001
      Property: REFERENCEABLE, value=??? (can I refer to Node 00X, UUID 0123??)
WorkspaceB
  root node
    Node 00X
      UUID = 0123

Is this possible in Jackrabbit? Is this possible in other implementations of JSR-170? Is this something that is not defined by JSR-170?
I have read through the Spec, and it does not seem to address this, so I am assuming that JSR-170 does not address this. It talks about corresponding nodes (which I am not sure I understand yet), but this does not seem to address the question I am interested in. Thanks. mjkelleher [EMAIL PROTECTED]
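The UUID-as-string workaround described above can be sketched like this (JCR 1.0 API; the property name, workspace name and credentials are made up for the example, and the target node must be mix:referenceable):

```java
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

// Sketch of a "soft" cross-workspace reference: store the target's UUID
// as a plain STRING property, then resolve it in application code with a
// second session opened on the other workspace.
public class CrossWorkspaceRef {
    public static Node resolve(Repository repository, Node source) throws Exception {
        String uuid = source.getProperty("targetUuid").getString();
        Session other = repository.login(
                new SimpleCredentials("user", "password".toCharArray()), "WorkspaceB");
        return other.getNodeByUUID(uuid); // throws if the UUID is dangling
    }
}
```

Unlike a real REFERENCE property there is no referential integrity here; a dangling UUID has to be handled by the application, which is exactly the trade-off discussed above.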
Re: Jackrabbit Scalability / Performance
hi viraf, thanks for your mail. Has anyone built an application similar to that described above? What version of Jackrabbit was used, and what were the issues that you ran into? How much meta-data did a node carry, what was the average depth of a leaf node, and how many nodes did you have in the implementation before performance became an issue? we built a digital asset management application that sounds very similar to what you are describing. the meta information varies from filetype to filetype but ranges on average between 10 and 50 properties per nt:resource instance. in addition to typical meta information we also store a number of thumbnail images in the content repository for every asset. I am considering building a cluster of servers providing repository services. Can the repository be clustered? (a load balancer in front of the repository will distribute requests to a pool of repository servers.) yes, jackrabbit can be clustered. i would recommend though to run the repository with model 1 or model 2 [1] and just use the load balancer on top of your application. this avoids the overhead of remoting altogether and still provides you with clustering. [1] http://jackrabbit.apache.org/doc/deploy.html How does the repository scale? Can it handle 50 million artifacts? (if the artifacts are placed on the file system, does Alfresco manage the directory structure or are all files placed in a single directory) assuming that you mean jackrabbit... ;) we ran tests beyond 50m files, and yes, jackrabbit manages the filesystem if the filesystem is chosen as the persistence layer for blobs. Is there support for auditing access to documents? this could easily be achieved with a decoration layer. Is there support for defining archival / retention policies? jackrabbit certainly offers the hooks to build records management but does not come with ootb archival or retention facilities. Is there support for backups?
for the most convenient backup i would recommend persisting the entire content repository in an rdbms and using the rdbms features for backup. regards, david
[jira] Created: (JCR-890) concurrent read-only access to a session
concurrent read-only access to a session Key: JCR-890 URL: https://issues.apache.org/jira/browse/JCR-890 Project: Jackrabbit Issue Type: Improvement Components: core Reporter: David Nuescheler Assigned To: Stefan Guggisberg Even though the JCR specification does not make a statement about Sessions shared across a number of threads, I think it would be great for many applications if we could state that sharing a read-only session is supported by Jackrabbit. On many occasions on the mailing lists we have stated that there should not be an issue with sharing a read-only session; however, I think it has never been thoroughly tested or even specified as a design goal. If we can come to an agreement that this is desirable, I think it would be great to start including testcases to validate that behaviour and to update the documentation accordingly.
Re: NGP: Value records
hi jukka, i am very much in favor of such an approach. My idea is to store each value in a unique and immutable value record identified by a value identifier. Duplicate values are only stored once in a single value record. This saves space especially when storing multiple copies of large binary documents, and allows value equality comparisons based on just the identifiers. this sounds great for large (binary and string) property values. A value record would essentially be an array of bytes as defined in Value.getStream(). In other words the integer value 123 and the string value "123" would both be stored in the same value record. More specific typing information would be indicated in the property record that refers to that value. For example an integer property and a string property could both point to the same value record, but have different property types that indicate the default interpretation of the value. i think that with small values we have to keep in mind that the key (value identifier) may be bigger than the actual value, and of course the additional indirection also has a performance impact. do you think that we should consider a minimum size for values to be stored in this manner? personally, i think that this might make sense. anyway, what key did you have in mind? i would assume some sort of a hash (md5) could be great, or is this still more abstract? Name and path values are stored as strings using namespace prefixes from an internal namespace registry. Stability of such values is enforced by restricting this internal namespace registry to never remove or modify existing prefix mappings; only new namespace mappings can be added. sounds good, i assume that the internal namespace registry gets its initial prefix mappings from the public namespace registry? i think having the same prefixes could be beneficial, since remappings and removals are very rare even in the public registry, and this would allow us to optimize the more typical case even better.
Achieving uniqueness of the value records requires a way to determine whether an instance of a given value already exists. Some indexing is needed to avoid having to traverse the entire set of existing value records for each new value being created. i agree, and i think we have to make sure that the overhead of calculating the key (value identifier) is reasonable, so insert performance doesn't suffer too much. i could even see an asynchronous model that inlines values of all sizes initially and then leaves it up to some sort of garbage collection job to extract the large values and store them as immutable value records... this could preserve insert performance and would still allow us to benefit from efficient operations for things like copy, clone, etc., and of course the space consumption benefits. so i guess in short i would be in favor of a value mechanism that can transparently handle both (a) inlining the values without the extra indirection (for small or quickly inserted values) and (b) immutable value records. just my two cents. regards, david
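A toy sketch of the content-addressed value record idea under discussion (plain Java; the MD5 key and the inline threshold are the assumptions floated above, not an agreed design):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Toy content-addressed value store: identical byte sequences collapse to
// one record, and equality checks reduce to comparing identifiers.
class ValueRecordStore {
    // Hypothetical minimum size below which a value would stay inlined
    // in the property record instead of paying for the indirection.
    static final int INLINE_THRESHOLD = 1024;

    private final Map<String, byte[]> records = new HashMap<String, byte[]>();

    // Store a value and return its identifier (here an MD5 hex digest).
    String store(byte[] value) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(value);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String id = hex.toString();
        if (!records.containsKey(id)) {
            records.put(id, value); // duplicate values are stored only once
        }
        return id;
    }

    int recordCount() { return records.size(); }
}
```

Storing the same bytes twice yields the same identifier and a single record, which is the space saving and cheap equality comparison the proposal is after.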
Re: Jackrabbit on DB2 ZOS v 8
Hi Vikas, maybe a contribution as a separate zOS DB2 Persistence Manager would be an idea. I could certainly see something like that as a contrib. Similar to the Oracle9BundlePersistenceManager idea from Jukka. regards, david On 4/18/07, Stefan Guggisberg [EMAIL PROTECTED] wrote: hi vikas, On 4/16/07, Vikas Bhatia [EMAIL PROTECTED] wrote: Jackrabbit persistence on DB2 ZOS v 8 is different than it is for most databases. We had to make certain changes to the jackrabbit core in order to get it working. Changes involved 3 java files, 2 ddl files and 1 repository.xml file in the paths listed: * jackrabbit-core/src/main/java/org/apache/jackrabbit/core/fs/db DatabaseFileSystem.java DB2FileSystem.java db2-zos.ddl * jackrabbit-core/src/main/java/org/apache/jackrabbit/core/persistence/db DatabasePersistenceManager.java db2-zos.ddl * repository.xml We would like to contribute these changes back to the community since they might help someone in the future. Ideally these changes should be reflected in the upcoming releases, but DB2 ZOS has an 8-character limit on tablespace and database names. Thus we had to change the java files and hardcode a couple of values in the repository.xml. The default workspace had to be changed to def. Workspaces created using this code again have to take the 8-character limit into account. These changes might not be compatible with all databases. In that case we could just contribute this code into the contrib dir or put it up on the wiki. Please advise. first of all i'd like to thank you for sharing this information with the list. since using jackrabbit with db2 on z/OS is probably not a very common setup, putting this information on the wiki would IMO be the best solution. cheers stefan Vikas.
Re: JCR and SOAP
Hi Sten, thanks for looking into that. I would like to express my support for any effort going in that direction. I think it would be great to have SOAP bindings for the SPI in Jackrabbit. In combination with JCR2SPI and SPI2JCR it makes for a very interesting remoting layer. I think that this could be interesting in particular for the .NET [1] and PHP [2] ports [1] http://svn.apache.org/viewvc/jackrabbit/trunk/contrib/jackrabbit-net/ [2] http://svn.apache.org/viewvc/jackrabbit/trunk/contrib/phpcr/ regards, david On 4/10/07, Sten Roger Sandvik [EMAIL PROTECTED] wrote: A little fast on the trigger there. Looked at the SPI for a brief moment and noticed that the SPI2JCR project exists and that covers what I need to look at. Thanks again. /srs On 4/10/07, Sten Roger Sandvik [EMAIL PROTECTED] wrote: Thanks. I will look into the SPI package. The only thing is that it would be nice if the SOAP api could be used with any JSR-170 compliant server on the backend. /srs On 4/10/07, Marcel Reutegger [EMAIL PROTECTED] wrote: Hi Sten, I'd recommend creating a SOAP API based on the Jackrabbit SPI effort, because it is much easier to implement. In contrast to the JCR API, the Jackrabbit SPI is less object oriented (e.g. there are no methods to navigate between items, etc.) and better suited for protocols. See: http://article.gmane.org/gmane.comp.apache.jackrabbit.devel/11221/ http://article.gmane.org/gmane.comp.apache.jackrabbit.user/2463/ If you are interested in building a SOAP API based on the Jackrabbit SPI interfaces, you may want to have a look at the SPI-RMI implementation. It simply remotes the SPI interfaces using RMI. Using SOAP instead should be straightforward. regards marcel Sten Roger Sandvik wrote: Hi all! I have searched the archives of this mailing list and googled about JCR+SOAP, and I am puzzled that a SOAP api on top of JCR does not exist yet.
I know that WebDAV and friends are a platform-neutral way of accessing the JCR API, but recently I have been in need of a SOAP API. Has anyone else needed such an API? If anyone has done something here or has ideas, please let me know. If nobody has, I am considering beginning development of a generic SOAP mapping on top of JCR. The reason is that I need to access the JCR functionality via ActionScript (or Flex) without using Flex Data Services. Regards, Sten Roger Sandvik
better sql performance? [was: Re: poll on jcr query usage]
hi christoph, I'm mostly using XPath, but SQL gives me better performance if I only need a particular property of a node in the result set. can you elaborate on this? it would be great to have more background information on this... afaik, there should not be any difference in performance, and for example //element(*, nt:resource)/(@jcr:lastModified) should be equivalent to select jcr:lastModified from nt:resource regards, david
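As a point of reference, both forms go through the same javax.jcr.query API, so any measured difference would come from how the two languages are translated internally, not from the API itself. A minimal sketch, assuming a live Session named session obtained elsewhere:

```java
// Assumes: javax.jcr.Session "session" from an existing login.
QueryManager qm = session.getWorkspace().getQueryManager();

// XPath form
Query xpath = qm.createQuery(
    "//element(*, nt:resource)/(@jcr:lastModified)", Query.XPATH);

// SQL form, expected to be equivalent
Query sql = qm.createQuery(
    "select jcr:lastModified from nt:resource", Query.SQL);

// Both should yield the same rows.
RowIterator rows = sql.execute().getRows();
```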
Google Summer of Code 2007 - Jackrabbit-JCR-Wikipedia
Hi all, after talking to Jukka and Stefan, I added the jackrabbit-jcr-wikipedia project for the Google Summer of Code 2007 to the wiki. http://wiki.apache.org/general/SummerOfCode2007#jackrabbit-jcr-wikipedia Generally, I think that this application is an ideal candidate for a demo application; it has been mentioned as an explanatory example in the past. It makes it possible to show off a lot of the JCR features in an application that everybody is familiar with. Since there is quite a bit of content that could be imported from Wikipedia, I think this project would also demonstrate Jackrabbit's scalability in a very impressive fashion. regards, david
Re: Addendum to CCLA of Day Management
Hi Jim, thanks for the information. I faxed the addendum to the CCLA. please let me know if there is anything else that I need to do. regards, david On 2/27/07, Jim Jagielski [EMAIL PROTECTED] wrote: David, no problem. We just need you to fill out another CCLA, note the changes, and mark the CCLA as an addendum to the one currently on file. You can FAX it to us (410-803-2258) or Email a scanned copy of the signed-and-filled-out CCLA to [EMAIL PROTECTED] *and* [EMAIL PROTECTED] Thanks! On Feb 27, 2007, at 5:43 AM, David Nuescheler wrote: Hi Jim, I would like to make the following addendum to the CCLA of Day Management. Schedule B: Set of bundle persistence manager components for Apache Jackrabbit, attached to http://issues.apache.org/jira/browse/JCR-755 as JCR-755.patch.gz (MD5 checksum a8accf17e35d1dec52f5b4fcc277bb9e). I would also like to extend the list of covered employees to the following: The Schedule A (list of covered employees) addendum: Tobias Bocanegra [EMAIL PROTECTED] Bertrand Delacretaz [EMAIL PROTECTED] Roy T. Fielding [EMAIL PROTECTED] Stefan Guggisberg [EMAIL PROTECTED] Felix Meschberger [EMAIL PROTECTED] Thomas Mueller [EMAIL PROTECTED] David Nuescheler [EMAIL PROTECTED] Dominique Pfister [EMAIL PROTECTED] Peeter Piegaze [EMAIL PROTECTED] Marcel Reutegger [EMAIL PROTECTED] Angela Schreiber [EMAIL PROTECTED] Carsten Ziegeler [EMAIL PROTECTED] Jukka Zitting [EMAIL PROTECTED] please let me know if there is anything else you need. regards, david --- David Nuescheler Chief Technology Officer Day Software AG Barfuesserplatz 6 / Postfach 4001 Basel Switzerland T 41 61 226 98 98 F 41 61 226 98 97
Addendum to CCLA of Day Management
Hi Jim, I would like to make the following addendum to the CCLA of Day Management. Schedule B: Set of bundle persistence manager components for Apache Jackrabbit, attached to http://issues.apache.org/jira/browse/JCR-755 as JCR-755.patch.gz (MD5 checksum a8accf17e35d1dec52f5b4fcc277bb9e). I would also like to extend the list of covered employees to the following: The Schedule A (list of covered employees) addendum: Tobias Bocanegra [EMAIL PROTECTED] Bertrand Delacretaz [EMAIL PROTECTED] Roy T. Fielding [EMAIL PROTECTED] Stefan Guggisberg [EMAIL PROTECTED] Felix Meschberger [EMAIL PROTECTED] Thomas Mueller [EMAIL PROTECTED] David Nuescheler [EMAIL PROTECTED] Dominique Pfister [EMAIL PROTECTED] Peeter Piegaze [EMAIL PROTECTED] Marcel Reutegger [EMAIL PROTECTED] Angela Schreiber [EMAIL PROTECTED] Carsten Ziegeler [EMAIL PROTECTED] Jukka Zitting [EMAIL PROTECTED] please let me know if there is anything else you need. regards, david --- David Nuescheler Chief Technology Officer Day Software AG Barfuesserplatz 6 / Postfach 4001 Basel Switzerland T 41 61 226 98 98 F 41 61 226 98 97
Re: [VOTE] Include BundlePersistenceManager Contribution [JCR-755]
+1 regards, david
Re: Modifying checking in properties.
hi peter, i think you are right. i will file an issue in jsr-283 to make sure that we don't forget to fix this. i think it would basically boil down to something like: --- The node N and its connected subtree become read-only, with the exception of properties and child-nodes that are set to OPV=VERSION or OPV=IGNORE. --- is that correct? regards, david On 6/2/05, Peter Morton [EMAIL PROTECTED] wrote: Not really, i read this section of the spec and it does not really do what I would like it to do. I would like properties that have OnParentVersionAction.IGNORE set on them to be read-write within my workspace regardless of the check-in status. Peter. -Original Message- From: Angela Schreiber [mailto:[EMAIL PROTECTED] Sent: 02 June 2005 13:45 To: jackrabbit-dev@incubator.apache.org Subject: Re: Modifying checking in properties. hi you cannot set a property of a checked-in node. see jsr170 specification section 8.2.5 Check In: The node N and its connected non-versionable subtree become read-only. [...] Read-only status means that an item cannot be altered by the client using standard API methods (addNode, setProperty, etc.). The only exceptions to this rule are the restore, Node.merge and Node.update operations; [...]. and section 8.2.6 Check Out: In order to alter a versionable node (and its non-versionable subtree) the node must be checked-out. is that what you were looking for? kind regards Peter Morton wrote: Hello, I would like to set the value of a property of a node that is checked-in, without having to check the node out (i.e. so that it only exists in the workspace and not under version control). I have set the property OnParentVersionAction.IGNORE for onParentVersion, but the code fails due to the following check: // verify that parent node is checked-out if (!parent.internalIsCheckedOut()) { throw new VersionException("cannot set the value of a property of a checked-in node " + safeGetJCRPath()); } Should I be doing this a different way? Peter. 
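Until the spec is clarified, a common workaround under the current JSR-170 rules is a brief checkout around the write. A hedged sketch, where the property name is invented for illustration and is assumed to be declared with OnParentVersion=IGNORE in its node type:

```java
// "node" is assumed to be a versionable javax.jcr.Node.
// "myapp:workspaceOnly" is a hypothetical property declared with
// OnParentVersion=IGNORE in its node type definition.
if (!node.isCheckedOut()) {
    node.checkout();
}
node.setProperty("myapp:workspaceOnly", "draft-state");
node.save();
node.checkin();   // re-establish the read-only state
```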
__ __ This email (and any attachments) is private and confidential, and is intended solely for the addressee. If you have received this communication in error please remove it and inform us via telephone or email. Although we take all possible steps to ensure mail and attachments are free from malicious content, malware and viruses, we cannot accept any responsibility whatsoever for any changes to content outwith our administrative bounds. The views represented within this mail are solely the view of the author and do not reflect the views of Graham Technology as a whole. __ __ Graham Technology plc http://www.gtnet.com
Re: Scalability concerns, Alfresco performance tests
Hi Andreas, sorry for my very delayed answer. In your answer on TheServerSide, you said that Scalability is mainly a matter of choosing and configuring the persistence layer correctly. Are there any scenario recommendations / best practices available? I'll check out the website again, but insider knowledge is as always greatly appreciated. Generally, the configuration of Jackrabbit out-of-the-box running the DerbyPersistenceManager yields very acceptable results, both in terms of performance and in terms of scalability. Basically, having a row per item delegates the scalability question to Derby and how many rows it can store in a table. None of our tests have exhausted such a limitation. Backup/Restore operations in my experience usually happen on the persistence layer, which means that the restore operation (obviously) does not go through the normal user API. How would a transactional replication be implemented (e.g. from an authoring system to a live system in a DMZ)? If a lot of documents are involved, for instance after a URL change which affects a lot of links, this could probably lead to such a massive transaction. Should this be implemented by accessing the persistence layer directly? IIUC this would have the drawback that the JCR implementation couldn't be replaced without changing the replication code ... I think I agree with you that Jackrabbit should be able to run very large transactions, and I think that while there are workarounds for most applications (for example to split things into smaller transactions) those are not desirable. I think the best way forward is to make sure that this is fixed in Jackrabbit. I don't think that this is a design problem; people just have not requested this feature frequently enough for someone to care enough to fix it. Feel free to make it an issue and vote on it ;) Thanks for your input and again sorry for the delayed answer, regards, david
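The "split things into smaller transactions" workaround can be sketched without any JCR dependency. The chunking logic below uses a counting stub in place of a real javax.jcr.Session, so the numbers (batch size 1000, 2500 items) are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedImport {

    /** Hypothetical stand-in for a JCR session: records save() calls. */
    static class CountingSession {
        int saves = 0;
        final List<String> pending = new ArrayList<>();
        void addNode(String path) { pending.add(path); }
        void save() { saves++; pending.clear(); }
    }

    /** Adds all paths, saving after every batchSize items. */
    static int importInBatches(CountingSession session, List<String> paths, int batchSize) {
        int count = 0;
        for (String path : paths) {
            session.addNode(path);
            if (++count % batchSize == 0) {
                session.save();            // keep each transaction small
            }
        }
        if (!session.pending.isEmpty()) {
            session.save();                // flush the final partial batch
        }
        return session.saves;
    }

    public static void main(String[] args) {
        CountingSession session = new CountingSession();
        List<String> paths = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            paths.add("/import/node-" + i);
        }
        // 2500 items in batches of 1000 -> two full saves plus one partial
        System.out.println(importInBatches(session, paths, 1000));
    }
}
```

The same loop shape applies to a real session; only the save() cost changes.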
Re: Scalability concerns, Alfresco performance tests
Hi Andreas, Now, a news message [1] on TheServerSide about benchmarks provided by Alfresco to prove the superiority ermhh let's say state not prove ;) ...of their JCR implementation raises some concerns. I guess that this may have been exactly the intention ;) Also, the term JCR implementation may not be technically accurate; maybe someone could point me to an updated version of this: http://wiki.alfresco.com/w/index.php?title=JSR-170_Compliance A post in the thread claims that Jackrabbit isn't suited for large-scale scenarios and faces some problems in the transactional handling of some 100,000 nodes (Kev Smith, [2]): While Kev possibly has reasons to believe that, I don't. (Unless he talks about some 100k nodes in a single transaction with a given memory size.) From what we've seen, Alfresco is comparable to JackRabbit for small case scenarios - but Alfresco is much more scalable [...] Do you agree with this statement? If yes - are these problems related to the persistence manager abstraction? Is this a known issue, and will it be addressed? I do not even remotely agree with this statement. Jackrabbit has been built to scale freely in size. I have a hard time understanding this argument since both Jackrabbit and Alfresco can use the same RDBMS as the persistence layer, so at least on the persistence layer there should not be a substantial difference. Thoughts? We tried to load up JackRabbit with millions of nodes but always ran into blocker issues after about 2 million or so objects. Also, when loading up JackRabbit, the load needed to be carefully performed in small chunks, e.g. trying to load in 100,000 nodes at a time would cause PermGenSpace errors (even with a HUGE permgenspace!) and potentially place the repo into a non-recoverable state. I'm not sure if this will really be an issue for our usage scenario (except maybe for restoring backups), but I'm very interested in your opinions. 
That's true: the size of the non-binary portion of a commit is currently memory-constrained. Backup/restore operations in my experience usually happen on the persistence layer, which means that the restore operation (obviously) does not go through the normal user API. I actually would go as far as stating that it would be close to abuse of the API to go through the transient layer to restore an entire content repository. We are currently working on a solution for that, but since nobody has had a pressing need, it has had a relatively low priority. If this is a pressing issue for your project, feel free to file a JIRA issue. regards, david
Re: [VOTE] Release Apache Jackrabbit 1.1.1
[X] +1 Release the packages as Apache Jackrabbit 1.1.1 regards, david
[jira] Commented: (JCR-644) Node.isNodeType() throws if namespace is not defined.
[ http://issues.apache.org/jira/browse/JCR-644?page=comments#action_12452387 ] David Nuescheler commented on JCR-644: -- I think that the client would initially set up its namespaces to be mapped on a session basis to whatever makes sense for the application and then use literals throughout the application. basically i think that a client that does not map its namespaces on a per-session basis is broken. From my perspective I would assume that once the namespaces are mapped, I would like to work in my application code using static/hardwired prefixes only. I think that this makes the code much more readable. With respect to isNodeType() I would then expect it to fail without an exception specific to the unknown prefix, much like a query operation that uses an unregistered nodetype or any other read operation for that matter. I guess from an application perspective I would even assume a getItem("/unknownprefix:bar") to throw a PathNotFoundException. I believe if the more explicit namespace exception was intended by the spec, getItem() for example would explicitly throw a NamespaceException (or similar). Node.isNodeType() throws if namespace is not defined. - Key: JCR-644 URL: http://issues.apache.org/jira/browse/JCR-644 Project: Jackrabbit Issue Type: Bug Components: nodetype Affects Versions: 1.0, 1.0.1, 1.1, 0.9 Reporter: Tobias Bocanegra Assigned To: Tobias Bocanegra eg: node.isNodeType("foo:MyNodeType") throws an exception if 'foo' is not defined. this is incorrect since an application should not need to check if the namespace exists before checking for a nodetype. it should return false. -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira
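The per-session mapping described above is a one-liner in the JCR API. A hedged sketch, where the URI is a made-up example and is assumed to be registered in the repository already:

```java
// Remap the "foo" prefix for this session only; the namespace URI
// below is hypothetical and must be registered beforehand.
session.setNamespacePrefix("foo", "http://example.com/ns/foo/1.0");

// From here on, application code can use the hardwired prefix:
boolean isType = node.isNodeType("foo:MyNodeType");
```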
Re: Alfresco + Jackrabbit
Hi Robert, Actually one thing that I find really interesting about Alfresco - in case anyone wants to implement it as an add-on to Jackrabbit - is the CIFS layer which supposedly allows good access to the server (as a document server) from Windows clients. I would imagine that using the jCIFS library it would be possible to write something similar for a more generic JSR-170 provider... I think this would be a good idea too; as a matter of fact we already looked into the feasibility of something like that and it seems to work just fine, with some random-access performance drawbacks if we want to keep it strictly bound to the JCR API. jCIFS though is a CIFS client, right? At least I have not found a CIFS server other than Starlasoft's / Alfresco's? Am I looking in the wrong place? Having tried mapping a WebDAV location as a network drive I can say that it really doesn't work in a usable fashion. Really? So far I have experienced generally suboptimal performance, but it works just as well as CIFS for me, both on MacOSX and Windows. What issues did you encounter? regards, david
Re: same-name-sibling compaction
Hi David, Personally, I think that there are going to be more volatile and less volatile paths in a content repository based on many different characteristics of the application(s) running on the repository. Generally SNS paths are less stable because they are not only impacted by moves of the node or an ancestor node but also by removals or additions of sibling nodes. I agree with you that, if one requires a stable identification of a node, a path to an SNS node is a poor node type design choice that I would certainly not recommend. Alternatively I would recommend using a referenceable node or staying away from SNS. Personally, I would even go further and state that SNS should be used in exceptional cases only and with great caution, in general. I will gladly forward this post to [EMAIL PROTECTED] for discussion in the Expert Group, and would encourage anyone to post specification enhancements to the above address. regards, david On 9/26/06, David Kennedy [EMAIL PROTECTED] wrote: I'd like to bring up the issue of compaction of indices with regard to same-name-siblings again. Compaction pretty much renders same-name-siblings useless if you refer to nodes via path. In a multi-user environment, at any point one user can move a node, thus changing the path of any following sibling referenced by other sessions. If an application maintains proxy data by path reference, the node may exist, however with a completely different path. Either applications need to make all nodes referenceable and manage via UUID or avoid SNS. Has the EG's direction with regard to SNS compaction changed in JSR-283? David
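The stable alternative recommended above looks roughly like this in JCR 1.0 code (the path is invented for illustration):

```java
// Make the node addressable by UUID instead of by SNS path.
Node node = session.getRootNode().getNode("docs/chapter");  // hypothetical path
node.addMixin("mix:referenceable");
node.save();
String uuid = node.getUUID();

// Later, possibly in another session: the UUID survives sibling
// additions, removals and index compaction, unlike "chapter[3]".
Node same = session.getNodeByUUID(uuid);
```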
Re: Nuclear Fission, Splitting the core: The SPI Effect [was: Improving the accessibility of the Jackrabbit core]
Hi, However, I'm a bit concerned about the revolutionary approach of the SPI effort. Rather than refactoring the Jackrabbit core to better separate the session-local parts, the SPI comes up with a brand new interface contract. This is probably the best thing to do given the SPI goals, but it does leave the big question of how and when we are going to integrate it with the Jackrabbit core unanswered. I think you are right. Just to be clear, I do not expect the architecture suggested or hinted at by the SPI to be implemented in a clean way in the very near future. My original intention of this thread really was to stimulate some discussion around a possible Jackrabbit 2.x architecture. As of now the easiest way I see to integrate the SPI effort with the Jackrabbit core is through a generic spi2jcr adapter, but that doesn't really affect the core design or increase code re-use. I agree. As a side note: I even see value for an spi2jcr adapter beyond the mid-term goal of a better remoting for the current Jackrabbit core. I think that an spi2jcr adapter (in conjunction with the jcr2spi client and the protocol bindings of the SPI) could serve as a general-purpose remoting layer for any JCR compliant repository. Generally, I think we could also look at a phased approach that allows us to test, evolve and mature the components individually. I think we could do something like: (Step 1) Isolate the session-local parts into a standalone client (JCR2SPI). (Step 2) Build the SPI2JCR layer that exposes the current Jackrabbit impl to the SPI. (Step 3) Refactor the Jackrabbit core to natively implement the SPI. Thoughts? What would a more deeply integrated spi2jackrabbit component look like, and how would we implement it in the core? I am not sure what that would look like and I guess that this would be subject to some investigation. It may well be that some portions would benefit from being refactored to work efficiently. 
And on the other hand, how can the SPI effort better reuse the experience built into the session-local parts of the Jackrabbit core? For example, looking at the SessionImpl implementations from both jcr2spi and the core, I see quite a lot of duplicate functionality. How does the SPI make sure that the lessons learned developing the core are included in the new codebase? I agree. I think the lessons learned should be transported through the Jackrabbit community and its experience with JCR and Jackrabbit over the past years. Of course I would also prefer to re-use as much existing and well tested code as possible, but personally I think we should not make architectural sacrifices at this point. I believe that the overlap and the redundancy of the code between the session-local parts and the core are rooted in the original compact and intertwined design. Do you think we would see the same overlap if we basically had a straight-up SPI implementation (on the server side) more or less from scratch and strictly separated the session-local parts? regards, david
Re: Nuclear Fission, Splitting the core: The SPI Effect [was: Improving the accessibility of the Jackrabbit core]
Hi Thomas, Thanks for your thoughtful comment. I don't understand why it needs to be stateless (about my understanding of stateless, see http://en.wikipedia.org/wiki/Stateless_server). As far as I see stateless means it's slower, and I really don't like slow ;-) Even HTTP is becoming more and more stateful to improve performance I guess. Maybe Roy could give his view about stateless versus stateful. I know there are some other advantages / disadvantages. Hmm... I am not sure if I would agree with the generally slower statement, but I completely agree that this is a touchy topic. I remember a somewhat lengthy verbal discussion revolving around a similar topic. Some of the legacy repositories that may want to implement the SPI are stateful, which makes it less intuitive for them to implement a completely stateless SPI. I still have not found a completely satisfying solution for that, but somehow it would be great if a well-behaved client could issue something like login() and possibly logout() to indicate to the server that some heavy-weight resources can be disposed. I understand that something like that could possibly break the stateless contract, but it could solve a very practical need. I could envision something along the lines of passing a token (or a cookie, to borrow an HTTP analogy) on the login() call which would be passed back to the server on subsequent calls to help identify the server-session. Of course the server should also be able to work without this token, but from a performance perspective it would be capable of optimizing the use of some of its resources. What do you think? regards, david
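To make the idea concrete, here is a purely hypothetical interface sketch; none of these names exist in the actual Jackrabbit SPI, and the token is deliberately optional so that the contract stays effectively stateless:

```java
// Hypothetical sketch only - not part of any real Jackrabbit API.
public interface SessionToken {
    String getValue();   // opaque, server-issued identifier
}

public interface RepositoryService {
    // A well-behaved client announces itself and receives a token the
    // server MAY use to pool heavy-weight resources.
    SessionToken login(String workspaceName, Object credentials);

    // Every call may carry the token back; passing null must still
    // work, which preserves the stateless contract.
    Object getNodeInfo(SessionToken token, String path);

    // Hint that associated server-side resources can be disposed.
    void logout(SessionToken token);
}
```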
Re: getting the latest version of a checkedout node
Hi J, I think that your explanations point in the direction of multiple workspaces. If user A has a workspace A, that user can make modifications in his workspace without user B in workspace B seeing the changes. As soon as user A checks something into the version store, user B can check it out and look at it using merge() or update(). Does that suit your use case? regards, david
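A hedged sketch of that flow, assuming two sessions (sessionA on workspace A, sessionB on workspace B) logged into the same repository, and a made-up path:

```java
// User A edits and publishes via the version store.
Node nodeA = sessionA.getRootNode().getNode("docs/page");  // hypothetical path
nodeA.checkout();
nodeA.setProperty("text", "new draft");
nodeA.save();
nodeA.checkin();

// User B pulls A's checked-in state into workspace B.
Node nodeB = sessionB.getRootNode().getNode("docs/page");
nodeB.update("A");                 // overwrite with workspace A's state
// ... or reconcile instead:  nodeB.merge("A", true);
```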
Re: getting the latest version of a checkedout node
and ;) as marcel put it: as a quick guideline: if you don't know how to achieve something ask on the user list, if you think something is wrong and doesn't work as expected use the dev list. Thank you.
Re: Improving the accessibility of the Jackrabbit core
Hi All, Dave, thanks a lot for your input.

. Screenshots or an easily downloadable sample app which actually does something with custom node types. The base war download is good, but how far could you go with it? Most open source applications have a contacts application or a phone book, or something similar. Something that has a face, like a JSP to view what's in the repository, would be great.
. The wiki has not been updated regularly; either the information is old or not many people go to it.
. The deployment models: creating a complete Tomcat dist which has the various deployment options running right out of the box would be nice.
. A Java example to add node types, for example for a phone book, which CRUDs the node types, would be nice.
. Maybe a page which lists the possibilities of applications that could be built with JR would be useful for newbies.

I completely agree with you that all of the above are excellent measures that we should be looking at to ease the adoption of new content application developers. I think it is very important that people get things up and running very quickly and are equipped with very good user documentation. Personally, I think we have to separate the concerns though. I think Jukka's initial post was going in the direction of making the internals of the core more accessible to more developers. I think that there are a number of steps that we can take in that direction, and I also think that, for example, the separation eventually provided by the SPI will bring some more architectural clarity. While I agree that we need to have a modular design where people can plug in their extensions at certain defined interfaces and extension points, I would discourage the idea that every user needs to be able to submit patches to the core. 
In my mind the core should be very compact and very controlled since it has to be extremely stable and scalable, meaning that there is not really a need to have dozens of developers working on a more smallish core. regards, david
Re: Improving the accessibility of the Jackrabbit core
Hi Nico, Thanks for your mail. I will work on the documentation directly on the wiki (when I can start this task). I will ask a lot of questions *though*. Looking forward to it ;) One precision on the backup tool: it is working (and I am polishing the code that needs to fit in Core). And with my new JR understanding, I plan to start implementing a version 2 in my spare time having hotbackup. Excellent, thanks for all your efforts. I did not mean to imply that the backup tool was not working. If I should have said anything like that, I would like to apologize. regards, david
Re: Additional version metadata
Hi JavaJ, For each version of a node, I would like to attach additional metadata such as the uuid of the user who created the node, a user-friendly version number, user comments, etc. Has anyone tried to do something similar? I would recommend populating the node with additional versioning-specific properties before you check it in (maybe in a separate mixin describing your additional meta information). This should allow any structure of meta-data and also very convenient querying. regards, david
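A hedged sketch of that approach; the "myapp:versionMeta" mixin and its property names are invented for illustration and would have to be registered as a custom node type first:

```java
// Set the metadata while the node is checked out, then check in:
// the properties are frozen into the new version.
node.checkout();
if (!node.isNodeType("myapp:versionMeta")) {
    node.addMixin("myapp:versionMeta");        // hypothetical mixin
}
node.setProperty("myapp:versionAuthor", session.getUserID());
node.setProperty("myapp:versionLabel", "2.1-draft");
node.setProperty("myapp:versionComment", "fixed the intro paragraph");
node.save();
node.checkin();
```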
Re: Searching binary data
Hi Roy, first of all I think this is rather a post for the user list than the dev list of Jackrabbit. On 9/2/06, Roy Russo [EMAIL PROTECTED] wrote: Running in to a problem when searching jcr:data types... select * from portalcms:content where jcr:data like '%JBoss%'; I assume that in your example you want to issue a fulltext search looking for JBoss, correct? So your query should look like this: select * from nt:resource where contains(*,'JBoss') See page 296: 8.5.4.5 CONTAINS of the JCR v1.0 spec regards, david
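Issued through the query API, the corrected statement would look something like this (session is assumed to be a live javax.jcr.Session):

```java
QueryManager qm = session.getWorkspace().getQueryManager();
Query q = qm.createQuery(
    "select * from nt:resource where contains(*, 'JBoss')", Query.SQL);
NodeIterator hits = q.execute().getNodes();
while (hits.hasNext()) {
    System.out.println(hits.nextNode().getPath());
}
```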
Re: Graphics into Jackrabbit documentation
Just one little question about graphs in the documentation. For example: http://jackrabbit.apache.org/images/arch/level-2.jpg Which software did you use to make this picture? a random 3d-shader for the blocks and then (i am almost ashamed to admit) mostly ms powerpoint ;)... all manual labour as toby pointed out. regards, david
Re: [VOTE] Release Apache Jackrabbit 1.0.1
+1 regards, david