Hi Apache Syncope Team! We are running Syncope 2.0.14 in our installation.
I noticed that org.apache.syncope.core.provisioning.java.data.DefaultItemTransformer was removed on the 2_1_X branch: https://github.com/apache/syncope/commit/5a1a1f06164bb43d641c2cce9dd7dc954ac475f3

Previously, implementing a custom transformer by extending DefaultItemTransformer (which in turn was marked with @Transactional(readOnly = true)) meant that it would automatically either join an already-open transaction or open a new one. On the 2_1_X branch, inherited custom transformers are no longer transactional by default, and this ability is intentionally no longer provided out of the box. Was there a particular reason for this change? [Q1]

The reason I am asking: we have to read some data using JPA DAO Spring beans in our custom attribute transformer during a user update operation, triggered by an HTTP request to PUT /syncope/rest/users/c742d863-566d-4193-82d8-63566df193fb. Say we have a custom transformer class org.apache.syncope.core.provisioning.java.data.CustomTransformer, which extends the previously available DefaultItemTransformer. During execution of org.apache.syncope.core.provisioning.java.data.CustomTransformer#beforePropagation, the transaction already opened by org.apache.syncope.core.workflow.java.AbstractUserWorkflowAdapter#update is used, because AbstractUserWorkflowAdapter is marked with @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = { Throwable.class }). Before any data is read in CustomTransformer#beforePropagation through the autowired JPA DAO bean, all the user-related data that was fetched from the database, mapped to managed entities (Java objects) and then edited in org.apache.syncope.core.provisioning.java.data.UserDataBinderImpl#update gets flushed.
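To make the setup concrete, here is a simplified, self-contained sketch of the class hierarchy involved. The signatures are illustrative only, not the exact Syncope API:

```java
import java.util.List;

// Simplified sketch (illustrative signatures, NOT the exact Syncope API).
// In 2.0.x the base class was marked @Transactional(readOnly = true), so
// subclasses automatically joined an already-open transaction or opened a
// new one; on 2_1_X that annotation (and the class itself) is gone.
class DefaultItemTransformer {

    // identity transformation by default
    public List<Object> beforePropagation(List<Object> values) {
        return values;
    }
}

// Our custom transformer: it reads additional data through autowired
// JPA DAO beans (omitted here) before delegating to the default behavior.
class CustomTransformer extends DefaultItemTransformer {

    @Override
    public List<Object> beforePropagation(List<Object> values) {
        // DAO reads happen here; on 2_1_X they no longer run inside an
        // inherited transactional context
        return super.beforePropagation(values);
    }
}
```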
Here is why the flush happens: before any data is read, an incremental flush (org.apache.openjpa.kernel.BrokerImpl#FLUSH_INC) is performed for the select operation (org.apache.openjpa.kernel.QueryOperations#OP_SELECT, the symbolic constant indicating that the query will perform a select), because org.apache.openjpa.kernel.QueryFlushModes#FLUSH_TRUE is enabled (this corresponds to the openjpa.FlushBeforeQueries configuration property, which defaults to "true"). And sometimes (not always) we get an error like:

15:00:46.007 ERROR org.apache.syncope.core.rest.cxf.RestServiceExceptionMapper - Exception thrown
org.apache.openjpa.persistence.InvalidStateException: Encountered unmanaged object "org.apache.syncope.core.persistence.jpa.entity.user.JPAUPlainAttrValue@19c497cd" in life cycle state unmanaged while cascading persistence via field "org.apache.syncope.core.persistence.jpa.entity.user.JPAUPlainAttr.values<element:class org.apache.syncope.core.persistence.jpa.entity.user.JPAUPlainAttrValue>" during flush. However, this field does not allow cascade persist. You cannot flush unmanaged objects or graphs that have persistent associations to unmanaged objects.
Suggested actions: a) Set the cascade attribute for this field to CascadeType.PERSIST or CascadeType.ALL (JPA annotations) or "persist" or "all" (JPA orm.xml), b) enable cascade-persist globally, c) manually persist the related field value prior to flushing. d) if the reference belongs to another context, allow reference to it by setting StoreContext.setAllowReferenceToSiblingContext().
at org.apache.openjpa.kernel.SingleFieldManager.preFlushPC(SingleFieldManager.java:786) ~[openjpa-kernel-2.4.3.jar:2.4.3]
at org.apache.openjpa.kernel.SingleFieldManager.preFlushPCs(SingleFieldManager.java:762) ~[openjpa-kernel-2.4.3.jar:2.4.3]
at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:664) ~[openjpa-kernel-2.4.3.jar:2.4.3]
at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:589) ~[openjpa-kernel-2.4.3.jar:2.4.3]
at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:510) ~[openjpa-kernel-2.4.3.jar:2.4.3]
at org.apache.openjpa.kernel.StateManagerImpl.preFlush(StateManagerImpl.java:3055) ~[openjpa-kernel-2.4.3.jar:2.4.3]

Have you ever faced such an error in your ItemTransformers (or in other places) that read data from the database through the EntityManager? [Q2]

To solve this problem, I decided to mark org.apache.syncope.core.provisioning.java.data.CustomTransformer#beforePropagation with the @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = { Throwable.class }) annotation. This makes it use a new EntityManager (backed by another available DB connection) by opening a new transaction; in that case the flush does not happen, since the reads run in the newly opened transaction.

I would really appreciate it if you could answer questions [Q1] and [Q2]. Thanks in advance!

Kind Regards,
Dmitriy Brashevets
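P.S. For completeness, here is a minimal sketch of the workaround described above. The class name is hypothetical, the signature is illustrative rather than the exact Syncope API, and the Spring annotation is shown as a comment so the snippet compiles stand-alone:

```java
import java.util.List;

// Workaround sketch: run beforePropagation in its own transaction so that
// DAO reads use a fresh EntityManager and do not trigger an incremental
// flush of the outer persistence context. In the real class the method is
// annotated with Spring's
//   @Transactional(propagation = Propagation.REQUIRES_NEW,
//                  rollbackFor = { Throwable.class })
class TransactionalCustomTransformer {

    public List<Object> beforePropagation(List<Object> values) {
        // JPA DAO reads would happen here, inside the new transaction
        return values;
    }
}
```

One caveat worth noting: Spring applies @Transactional through a proxy, so the annotation only takes effect when beforePropagation is invoked on the Spring-managed bean; a direct self-invocation from another method of the same class bypasses it.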
