More than one policy per node?

2011-01-05 Thread Seidel. Robert
Hi,

In which cases do I have more than one policy per node?

To define a privilege for a user and a node, I have to loop over two 
collections (applicablePolicies and getPolicies), which together normally 
contain only one object, and add the privilege there. I browsed a little 
over the code of DefaultAccessManager and found only one applicable policy, or 
none if there is already one set. How can there be more than one policy?

And if there is only one policy, applicable or defined - is there an easier 
way to access it than looping over the two lists?

It's a little bit over-modeled, in my opinion.

Regards, Robert


Re: More than one policy per node?

2011-01-05 Thread Angela Schreiber

hi


> In which cases do I have more than one policy per node?


with the *current* implementation in jr there is only one policy
per node. but e.g. JCR-2331 would most probably be implemented
by having an additional non-editable policy. similarly having
a specific implementation for READ-access restriction could be
thought of as additional policy that only takes effect under
certain circumstances. and last but not least another jcr
implementation could be built on a different ac model that doesn't
use accesscontrollists. i don't find it too difficult to think of
some other approach that would make heavy use of multiple policies
per node.

regards
angela


RE: Doubt with username and password

2011-01-05 Thread Javier Arias


Thanks for your early answer. I was using SimpleLoginModule because I
did not know these modules are not for a production environment (I have
been working only a few days with Jackrabbit). I saw the file that you
suggested, but there is something I do not understand. Now I can only access
with the admin user. Where do I indicate which users are allowed and their
passwords?

Sorry, I am new to Jackrabbit and I am confused.

Thanks, regards.



AW: More than one policy per node?

2011-01-05 Thread Seidel. Robert
Hi

> i don't find it too difficult to think of
> some other approach that would make heavy use of multiple policies
> per node.

Indeed, I can think of such scenarios too, but every other Jackrabbit user has 
to implement a helper method like the one below to get the one policy that 
counts - this method could be standard on the AccessControlManager without 
taking away the possibility to use multiple policies per node:

AccessControlList getList(AccessControlManager acMgr, String path)
        throws AccessDeniedException, RepositoryException {
    // first look for an applicable (not yet bound) list ...
    for (AccessControlPolicyIterator it = acMgr.getApplicablePolicies(path);
            it.hasNext();) {
        AccessControlPolicy acp = it.nextAccessControlPolicy();
        if (acp instanceof AccessControlList) {
            return (AccessControlList) acp;
        }
    }
    // ... then for one that is already set on the node
    for (AccessControlPolicy acp : acMgr.getPolicies(path)) {
        if (acp instanceof AccessControlList) {
            return (AccessControlList) acp;
        }
    }
    throw new RepositoryException("No AccessControlList at " + path);
}
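For completeness, this is roughly how such a helper would be used to grant a privilege with the standard JCR 2.0 access control API (a sketch only - "grantRead", the session, principal and path are placeholders for your own setup):

```java
import java.security.Principal;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.security.AccessControlList;
import javax.jcr.security.AccessControlManager;
import javax.jcr.security.Privilege;

// Grant jcr:read to a principal on the node at "path" and persist it.
void grantRead(Session session, Principal principal, String path)
        throws RepositoryException {
    AccessControlManager acMgr = session.getAccessControlManager();
    AccessControlList acl = getList(acMgr, path); // helper from above
    acl.addAccessControlEntry(principal,
            new Privilege[] { acMgr.privilegeFromName(Privilege.JCR_READ) });
    acMgr.setPolicy(path, acl);
    session.save();
}
```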

Regards, Robert



Performance issue when removing high amount of nodes

2011-01-05 Thread docxa

Hi,

We have to store in our repository a high amount of data, using this kind of
tree:

Project1
|_Stream1
  |__Record1
  |__Record2
  ...
  |__Record120000
...
|_Stream2
  |__Record1
  |__Record2
  ...
  |__Record120000

etc.

It takes some time to add those records, which was expected, but it's even
more time-consuming to remove them. (sometimes even crashing the VM)
I understand it has to do with Jackrabbit putting it all in memory to check
for referential integrity violations.

While searching for answers on the mailing list I saw two ways of dealing
with this:
1- Deactivate referential integrity checking. I tried that, and it did not
seem to accelerate the process, so I may be doing it wrong. (And I guess
it's quite wrong to even do it)
2- Recursively removing nodes by packs.
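For what it's worth, the splitting part of approach 2 can be done generically; this is a plain-Java sketch (class name and pack size are made up), with the repository calls only indicated in comments:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "removing by packs": split a large child list into fixed-size
// packs and process one pack at a time, so the transient space never holds
// all pending deletions at once.
public class PackSplitter {
    public static <T> List<List<T>> packs(List<T> items, int packSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < items.size(); i += packSize) {
            result.add(new ArrayList<>(
                    items.subList(i, Math.min(i + packSize, items.size()))));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 25; i++) ids.add(i);
        // With a real repository, each pack would then be processed as:
        //   for (Node n : pack) n.remove();
        //   session.save();   // persist and free the transient state
        System.out.println(packs(ids, 10).size()); // 3 packs: 10 + 10 + 5
    }
}
```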

I noticed that when using the second method, the more children a node has,
the more time it takes to remove some of them. So I guess it would be
best to try and split the records across multiple subtrees.

So I'd like to know if there is a better way of organizing my data in order
to improve the add and remove operations. Also, is deactivating
referential integrity checking really risky, and how am I supposed to do
it? (I tried subclassing RepositoryImpl and using
setReferentialIntegrityChecking, but it didn't seem to change anything.)

Thank you for your help.

A. Mariette
DOCXA
-- 
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/Performance-issue-when-removing-high-amount-of-nodes-tp3175050p3175050.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.


AW: Performance issue when removing high amount of nodes

2011-01-05 Thread Seidel. Robert
Hi,

Jackrabbit keeps all children of a node in memory, so you should not have more 
than 10,000 direct child nodes at a node. (See 
http://jackrabbit.510166.n4.nabble.com/Suggestions-for-node-hierarchy-td2966757.html)

The solution in your case would be to add another level like

Project1
|_Stream1
  |__Records1-10000
     |_Record1
     |_Record2
  |__Records10001-20000
  ...
...
|_Stream2
  |__Records1-10000
     |_Record1
     |_Record2
  ...
  |__Records110001-120000
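In code, picking the bucket folder for a given record number could look like this (a plain-Java sketch - class and method names are illustrative, not Jackrabbit API; the 10,000 limit is the one mentioned above):

```java
// Compute the bucket ("Records...") folder name for a record number,
// so that no folder ever holds more than BUCKET_SIZE direct children.
public class RecordBuckets {
    static final int BUCKET_SIZE = 10000;

    // 1-based record number -> "Records1-10000", "Records10001-20000", ...
    public static String bucketName(long recordNo) {
        long start = ((recordNo - 1) / BUCKET_SIZE) * BUCKET_SIZE + 1;
        long end = start + BUCKET_SIZE - 1;
        return "Records" + start + "-" + end;
    }

    public static void main(String[] args) {
        System.out.println(bucketName(1));      // Records1-10000
        System.out.println(bucketName(10001));  // Records10001-20000
        System.out.println(bucketName(120000)); // Records110001-120000
    }
}
```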

Regards, Robert



Re: AW: Performance issue when removing high amount of nodes

2011-01-05 Thread docxa

Ok, I was afraid I had to do something like that.

Thank you very much for your answer.

A. Mariette
DOCXA
-- 
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/Performance-issue-when-removing-high-amount-of-nodes-tp3175050p3175420.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.


Re: AW: Performance issue when removing high amount of nodes

2011-01-05 Thread Alexander Klimetschek
On 05.01.11 13:35, docxa <d...@docxa.com> wrote:
> Ok, I was afraid I had to do something like that.

In any case it is very useful to have meaningful naming in your content
structure and not just a numbered list. Note that the common use of IDs
only comes from the technical way an RDBMS works, not necessarily from the
(human) data model.

So use some structure of your record data (like owner, for example, using
access control and containment as drivers; see also rule #2 of
http://wiki.apache.org/jackrabbit/DavidsModel ) to build nested folders.

In nearly all cases you have some kind of date or timestamp on your data
and you could use 2011/01/05 as folders, for example - but only if you
don't have a better structure of course.
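For example, a date-based folder path could be derived like this (a plain-Java sketch; the class and method names are illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Derive a nested folder path like "2011/01/05" from a record's timestamp.
public class DateFolders {

    public static String folderPath(Date timestamp) {
        // one new SimpleDateFormat per call: the class is not thread-safe
        return new SimpleDateFormat("yyyy/MM/dd").format(timestamp);
    }

    public static void main(String[] args) {
        System.out.println(folderPath(new Date())); // e.g. 2011/01/05
    }
}
```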

Regards,
Alex

-- 
Alexander Klimetschek
Developer // Adobe (Day) // Berlin - Basel






Re: AW: Performance issue when removing high amount of nodes

2011-01-05 Thread Robert Oschwald
 
 
> Jackrabbit keeps all children of a node in memory, so you should not have
> more than 10,000 direct child nodes at a node. (See
> http://jackrabbit.510166.n4.nabble.com/Suggestions-for-node-hierarchy-td2966757.html)
 


Jira Ticket for this problem is https://issues.apache.org/jira/browse/JCR-642






problem with removeMixin

2011-01-05 Thread PALMER, THOMAS C (ATTCORP)
We have a CMS application (Jackrabbit 2.2.0) that uses mix:versionable
nodes in the draft workspace but then tries to strip the mix-in when
promoting to the next workspace in the workflow.  For code like the
following:

 

1  if (node.isNodeType("mix:versionable")) {

2      node.removeMixin("mix:versionable");

3  }

 

Even when the check in line 1 passes, we still get the following
exception at line 2:

 

Caused by: javax.jcr.nodetype.NoSuchNodeTypeException: Mixin
mix:versionable not included in node /a/b/c
        at org.apache.jackrabbit.core.RemoveMixinOperation.perform(RemoveMixinOperation.java:87)
        at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:200)
        at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
        at org.apache.jackrabbit.core.NodeImpl.removeMixin(NodeImpl.java:926)
        at org.apache.jackrabbit.core.NodeImpl.removeMixin(NodeImpl.java:2330)

 

Any ideas?  Thanks -

 



Number of cluster nodes

2011-01-05 Thread Benjamin Papez

Hello,

is there a way to find out at runtime with an API (or some custom 
extensions) how many Jackrabbit cluster nodes are currently accessing a 
repository ?


Regards,
Benjamin


Re: problem with removeMixin

2011-01-05 Thread Alexander Klimetschek
On 05.01.11 16:35, PALMER, THOMAS C (ATTCORP) <tp3...@att.com> wrote:
> ... tries to strip the mix-in when
> promoting to the next workspace in the workflow.

Just a guess: was the mixin added through Node.addMixin() (then removing it
should work, if the version history is gone, afaik), or is it already part
of the node's primary node type (then you can't remove the mixin from a
certain node)?

Regards,
Alex

-- 
Alexander Klimetschek
Developer // Adobe (Day) // Berlin - Basel






RE: problem with removeMixin

2011-01-05 Thread PALMER, THOMAS C (ATTCORP)
It's part of the node definition:

[att:cmsFolder] > nt:base, mix:versionable, mix:deletable
 - att:nodeTypeKey (STRING) mandatory
 - * (UNDEFINED) multiple
 - * (UNDEFINED)
 + * (att:cmsContent) VERSION
 + * (att:cmsFolder) VERSION

[att:cmsContent] > nt:base, mix:versionable, mix:deletable
 - att:nodeTypeKey (STRING) mandatory
 - * (UNDEFINED) multiple
 - * (UNDEFINED)

Thanks -







Re: Jackrabbit 1.6.4 locking issue

2011-01-05 Thread Raj
Hi Everyone,

Any pointers would be helpful.

thanks in advance,
Rajeev

On Tue, Dec 21, 2010 at 1:51 PM, Raj db.raj...@gmail.com wrote:


 hi All,

 I am facing a deadlock in my web application, which I suspect is due to
 Jackrabbit.
 Any pointers to resolve this are welcome.

 Environment: JackRabbit 1.6.4 on Java 1.6.0_18 (32bit, windows 7 64bit).
 Webapp is deployed under tomcat 5.5.28

 There are 3 user operations initiated sequentially with 4-5 seconds delay.
 All 3 operations are similar in nature: they first read data from the
 jackrabbit repo and then copy that data.

 http-80-Processor18 - 1st operation,
 http-80-Processor19 - 2nd
 http-80-Processor25 - is 3rd.

 Relevant thread dump is posted below.

 Questions/Observations -
 Q1: processor 18 is waiting for lock 0x0c9ffda8 when it has already
 acquired it!
 Q2: processor 19 is waiting for 0x0c9de878 while it is already held by
 processor 25.
 Q3: processor 25 is waiting for lock 0x0c9ffd98 when it has already
 acquired it!

 regards,
 Rajeev

 thread dump
 
   http-80-Processor18
   daemon
   prio=6 tid=0x3c80c000
   nid=0xce0
   in Object.wait()
   [0x41e9d000]
  java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on 0x0c9ffda8
   (a
 EDU.oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$WriterLock)
at java.lang.Object.wait(Object.java:485)
at
 EDU.oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$WriterLock.acquire(Unknown
 Source)
- locked 0x0c9ffda8
   (a
 EDU.oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$WriterLock)
at
 org.apache.jackrabbit.core.state.DefaultISMLocking$WriteLockImpl.<init>(DefaultISMLocking.java:76)
at
 org.apache.jackrabbit.core.state.DefaultISMLocking$WriteLockImpl.<init>(DefaultISMLocking.java:70)
at
 org.apache.jackrabbit.core.state.DefaultISMLocking.acquireWriteLock(DefaultISMLocking.java:64)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager.acquireWriteLock(SharedItemStateManager.java:1836)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager.access$200(SharedItemStateManager.java:116)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:558)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:1473)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1503)
at
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
at
 org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
at
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:326)
at
 org.apache.jackrabbit.core.BatchedItemOperations.update(BatchedItemOperations.java:155)
at
 org.apache.jackrabbit.core.WorkspaceImpl.internalCopy(WorkspaceImpl.java:448)
at
 org.apache.jackrabbit.core.WorkspaceImpl.copy(WorkspaceImpl.java:666)
at
 com.thed.repository.TestcaseContentsManager.copyTestCases(TestcaseContentsManager.java:354)

 
   http-80-Processor19
   daemon
   prio=6 tid=0x3c80c400
   nid=0x15e0
   waiting for monitor entry
   [0x41f2d000]
  java.lang.Thread.State: BLOCKED (on object monitor)
at
 org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:167)
- waiting to lock 0x0c9de878
   (a org.apache.jackrabbit.core.state.LocalItemStateManager)
at
 org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:198)
at
 org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:352)
at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:298)
at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:562)
- locked 0x0c9de2c8
   (a org.apache.jackrabbit.core.ItemManager)
at
 org.apache.jackrabbit.core.lock.LockManagerImpl.refresh(LockManagerImpl.java:1092)
at
 org.apache.jackrabbit.core.lock.LockManagerImpl.nodeAdded(LockManagerImpl.java:1122)
at
 org.apache.jackrabbit.core.lock.LockManagerImpl.onEvent(LockManagerImpl.java:1025)
at
 org.apache.jackrabbit.core.observation.EventConsumer.consumeEvents(EventConsumer.java:244)
at
 org.apache.jackrabbit.core.observation.ObservationDispatcher.dispatchEvents(ObservationDispatcher.java:201)
at
 org.apache.jackrabbit.core.observation.EventStateCollection.dispatch(EventStateCollection.java:474)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager$Update.end(SharedItemStateManager.java:780)
at
 org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1503)
at
 org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:351)
at
 

Import export fails for authorizable node

2011-01-05 Thread aimran

Hi, I am trying to export and import a workspace [system view] using the API.

However, I am getting a strange error when trying to save the imported data:

 javax.jcr.nodetype.ConstraintViolationException:
/jcr:root/rep:security/rep:authorizables/rep:users/u/u1/u...@test.com:
mandatory property {internal}password does not exist
at
org.apache.jackrabbit.core.ItemImpl.validateTransientItems(ItemImpl.java:464)
at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:1097)
at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:920)


The exported XML seems to have the password field for the said user:
<sv:node sv:name="u...@test.com">
  <sv:property sv:name="jcr:primaryType" sv:type="Name">
    <sv:value>rep:User</sv:value>
  </sv:property>
  <sv:property sv:name="jcr:uuid" sv:type="String">
    <sv:value>9a0dab63-beca-3def-b3c2-b73a878fe095</sv:value>
  </sv:property>
  <sv:property sv:name="email" sv:type="String">
    <sv:value>u...@test.com</sv:value>
  </sv:property>
  <sv:property sv:name="jcr:created" sv:type="Date">
    <sv:value>2011-01-05T13:08:12.890-06:00</sv:value>
  </sv:property>
  <sv:property sv:name="jcr:createdBy" sv:type="String">
    <sv:value>admin</sv:value>
  </sv:property>
  <sv:property sv:name="rep:password" sv:type="String">
    <sv:value>{sha1}5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8</sv:value>
  </sv:property>
  <sv:property sv:name="rep:principalName" sv:type="String">
    <sv:value>u...@test.com</sv:value>
  </sv:property>
</sv:node>

Is there a solution for this? Is this a bug?

Also, another related question:
I understand that jcr:system cannot be exported/imported... can the
jcr:security and jcr:policy be exported/imported?

Thanks,
Imran
-- 
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/Import-export-fails-for-authorizable-node-tp3176272p3176272.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.


Re: howto deploy jackrabbit 2.2 into netbeans

2011-01-05 Thread CarpeDiesG

In my case, Netbeans defines in root.xml that the application will be
deployed at:
C:\Projekte\rwk\target\rwk-1.0-SNAPSHOT
This directory contains two directories: META-INF and WEB-INF.
META-INF contains only the file context.xml, and WEB-INF contains web.xml and
the two directories classes and lib.
In the lib directory there are the files jcr-2.0.jar and some
jackrabbit*.jar files.

My concrete question is: where do I have to put the configuration
described on the Jackrabbit website?

"In Tomcat 5.5, add the following snippet in your application's
WEB-INF/context.xml or
$CATALINA_HOME/conf/enginename/hostname/webappname.xml. If you prefer
central configuration, you can add the configuration to the Host Context
section in the server.xml."

I do not have those files at the described position.
-- 
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/howto-deploy-jackrabbit-2-2-into-netbeans-tp3171343p3176274.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.


Re: Jackrabbit 1.6.4 locking issue

2011-01-05 Thread shailesh mangal
Unfortunately the issue still exists, even after we moved to 2.2. Here is my 
original post:

http://dev.day.com/discussion-groups/content/lists/jackrabbit-users/2010-12/2010-12-29_Concurrency_issues_after_upgrading_to_2_2_from_1_6_shailesh_mangal.html


In our use case, we allow bulk operations where multiple users add nodes to the 
same parent. Each user request runs on its own thread and doesn't share the 
session (common authentication credentials, though). 

We are very hard pressed to fix this issue and are considering all alternatives 
at this point. Any help is highly appreciated.

-Shailesh




From: Jukka Zitting jukka.zitt...@gmail.com
To: users@jackrabbit.apache.org
Sent: Wed, January 5, 2011 8:32:07 AM
Subject: Re: Jackrabbit 1.6.4 locking issue

Hi,

On Wed, Jan 5, 2011 at 5:19 PM, Raj <db.raj...@gmail.com> wrote:
> Any pointers would be helpful.

This seems like one deadlock scenario I fixed as a part of JCR-2699
[1]. Can you check if the problem still occurs with the latest 2.x
releases?

See also JCR-2791 [2] for some relevant followup discussion.

[1] https://issues.apache.org/jira/browse/JCR-2699
[2] https://issues.apache.org/jira/browse/JCR-2791

BR,

Jukka Zitting


How to configure using BindableRepository instead of TransientRepository

2011-01-05 Thread longkas

Hi,

I deployed jcr-ds.xml and jackrabbit-jca.rar (version 2.1.0) in a JBoss
server. When I look up the repository:

InitialContext ctx = new InitialContext();
Repository repository = (Repository) ctx.lookup("java:jcr/local");

the repository is an instance of TransientRepository. I tried to find a
place to configure it to use BindableRepository instead; I guess it's not in
repository.xml. How can I do this?

BR,
Archie


-- 
View this message in context: 
http://jackrabbit.510166.n4.nabble.com/How-to-configure-using-BindableRepository-instead-of-TransientRepository-tp3176961p3176961.html
Sent from the Jackrabbit - Users mailing list archive at Nabble.com.