Dear all,

To further add to the discussion: we observed that if component1 references this JpaTemplate service, component2 references component1, and component2's activate method performs a time-consuming operation, then the JpaTemplate services for the remaining two persistence units are not registered. This occurs roughly 4 out of 10 times. When we removed this operation from component2's activate method, the JpaTemplate services were registered for all three persistence units. So it appears that the bundle startup order/timing has an impact on the provisioning of the JpaTemplate services, which it shouldn't. Has anyone else observed this? And how can it be resolved?

Thanks & Regards,
Dheeraj


Dear all,

after possible misunderstandings, maybe some clarification for this scenario: "deleting the bundle cache" is done BEFORE starting Karaf in this case, to start from a clean status. Only after deleting the data folder is Karaf started.

When repeating this 10 times (starting Karaf from the "clean" state, i.e. with the data folder deleted), everything starts correctly about 5 times. The other 5 times, the problem with missing JpaTemplate services occurs. To be precise: typically, the JpaTemplate service is missing for only two out of three persistence units. Once the system is in either the "failure" state or the "correct" state, restarting Karaf WITHOUT deleting the data folder beforehand keeps it in that state.

From this, we have drawn some conclusions:

1. The configuration (data sources, persistence bundles' manifests, persistence units) is basically correct, otherwise it would never start up properly.

2. The problem looks somehow connected with a bundle start order that is "memorized" in the bundle cache. As mentioned above, once the startup order is "correct", Karaf will always start correctly when restarted without deleting the data folder. When it is "wrong", we always end up with missing JpaTemplate services.

Additional information:

- The "raw" data sources are created via pax-jdbc-config. We can always see that they are registered correctly.

- The "derived" data sources are created after running Flyway migration scripts on the "raw" data sources. Those are also always registered correctly. The persistence units then use the "derived" data sources. Piping them through Flyway is not done with the ops4j.preHook, but by a component in a "DB migration" bundle (roughly sketched below).

- All application bundles, including the ones which call the Flyway migration, have no start level specified, i.e. they all start at level 80. When we still used Blueprint, we had seen some issues with this, but after migrating to Declarative Services, the resolution of the service dependencies always worked properly.

- It is interesting that with up to 2 persistence units, everything works well. When adding the third, the problem shows up now and then.

- We don't know whether this is relevant, but the typical scenario is to have 2 persistence units (and data sources) working on one database with different schemas, and the third one working on another database.

- We have seen this mainly with H2 databases, but sometimes also with PostgreSQL.

So currently we don't have a clue why the JpaTemplate services for two out of three persistence units sometimes don't show up in the service registry.

Regards,
Jochen
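For readers trying to picture the setup, here is a minimal sketch of what such a "DB migration" component could look like, assuming Declarative Services annotations and the Flyway 5.1+ fluent API. This is not the actual code from the thread; the package, the component name, and the service property values ("mydb", "mydb.migrated") are made up for illustration.

package com.example.dbmigration; // hypothetical package

import java.util.Dictionary;
import java.util.Hashtable;

import javax.sql.DataSource;

import org.flywaydb.core.Flyway;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

@Component(immediate = true)
public class DbMigrationComponent {

    // Binds the "raw" DataSource registered by pax-jdbc-config; the target
    // filter matches the name given in the pax-jdbc factory configuration.
    @Reference(target = "(osgi.jndi.service.name=mydb)")
    private DataSource rawDataSource;

    private ServiceRegistration<DataSource> derivedRegistration;

    @Activate
    void activate(BundleContext ctx) {
        // Run the Flyway migration scripts against the raw data source first.
        Flyway.configure().dataSource(rawDataSource).load().migrate();

        // Only after a successful migration, register the "derived" DataSource
        // under the name the persistence units reference.
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("osgi.jndi.service.name", "mydb.migrated");
        derivedRegistration = ctx.registerService(DataSource.class, rawDataSource, props);
    }

    @Deactivate
    void deactivate() {
        if (derivedRegistration != null) {
            derivedRegistration.unregister();
        }
    }
}

The point of the pattern is that the "derived" DataSource only appears in the service registry after the migration has completed, so the persistence units never see an unmigrated schema; it also means the JPA container's startup depends on this component's activation timing.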
On 19.12.2018 at 12:43, Dheeraj Guntupalli wrote:

> Hello All,
>
> We are using Karaf 4.1.7, aries-jpa 2.7.0, and H2 in our application. We use multiple persistence units and multiple H2 database files within our application.
>
> We observed that when we have multiple H2 database files and restart Karaf after deleting the bundle cache a couple of times or more, we see the issue "Waiting for JpaTemplate". We can reproduce this issue frequently, and when it occurs, the JpaTemplate services are missing from the OSGi service registry. We suspect it is related to bundle/service startup/registration timing. Once the bundle cache is built and it works, it will also work on subsequent startups as long as the bundle cache isn't deleted.
>
> The persistence bundle creates the JpaTemplate service by providing the persistence unit, but also consumes it in the DAO services (this consumer pattern is sketched below). Possibly the required waiting time before the input data sources become available can trigger the issue.
>
> Has anyone faced this issue before, or could anyone give pointers on how to resolve it? Please let us know if you need more information.
>
> Thanks & regards,
> Dheeraj
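For completeness, the DAO-side consumption of the JpaTemplate mentioned above typically looks like the following minimal sketch, assuming Declarative Services and the Aries JPA 2.x template API. The component, the entity, and the persistence unit name "pu-one" are hypothetical.

package com.example.dao; // hypothetical package

import java.util.List;

import org.apache.aries.jpa.template.JpaTemplate;
import org.apache.aries.jpa.template.TransactionType;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = CustomerDao.class)
public class CustomerDao {

    // Aries JPA registers one JpaTemplate per persistence unit, carrying the
    // unit name in the osgi.unit.name service property.
    @Reference(target = "(osgi.unit.name=pu-one)")
    private JpaTemplate jpa;

    public List<?> findAll() {
        // txExpr runs the given function inside a transaction and returns its result.
        return jpa.txExpr(TransactionType.Supports,
                em -> em.createQuery("select c from Customer c").getResultList());
    }
}

Until the JpaTemplate for the unit named in the target filter is registered, such a component stays unsatisfied, which is exactly the "Waiting for JpaTemplate" state described in this thread.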