infinispan-core and its dependencies would need to be bundled as modules using the same module descriptors as the server.
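For illustration, a module descriptor on the embedded side might look roughly like the server's own org.infinispan module (a sketch only; the jar names and dependency list here are placeholders and would need to be copied from the server distribution's modules/ directory; omitting the slot attribute gives the default "main" slot, i.e. the "org.infinispan:main" identifier mentioned below):

    <?xml version="1.0" encoding="UTF-8"?>
    <module xmlns="urn:jboss:module:1.1" name="org.infinispan">
        <resources>
            <!-- placeholder jar names; match the server's own module.xml -->
            <resource-root path="infinispan-commons.jar"/>
            <resource-root path="infinispan-core.jar"/>
        </resources>
        <dependencies>
            <!-- illustrative only; copy the real dependency list from the server -->
            <module name="javax.api"/>
            <module name="org.jgroups"/>
            <module name="org.jboss.marshalling"/>
            <module name="org.jboss.logging"/>
        </dependencies>
    </module>
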
----- Original Message -----
> From: "Tristan Tarrant" <[email protected]>
> To: "infinispan -Dev List" <[email protected]>
> Cc: "Kurt T Stam" <[email protected]>, "Stelios Koussouris" <[email protected]>, "Richard Achmatowicz" <[email protected]>
> Sent: Thursday, October 2, 2014 10:38:21 AM
> Subject: Re: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan
>
> But then the module identifier wouldn't make sense: if you are embedding
> infinispan-core.jar, it would definitely not send "org.infinispan:main"
> as module:slot, which is what the server needs instead.
>
> Tristan
>
> On 02/10/14 16:06, Paul Ferraro wrote:
> > The only other obvious alternative of which I can think is to actually
> > start the application which uses embedded Infinispan using jboss-modules.
> > That way you don't need to hack the behavior of ModularClassResolver.
> >
> > ----- Original Message -----
> >> From: "Tristan Tarrant" <[email protected]>
> >> To: "Stelios Koussouris" <[email protected]>
> >> Cc: "Kurt T Stam" <[email protected]>, "infinispan -Dev List" <[email protected]>, "Richard Achmatowicz" <[email protected]>
> >> Sent: Thursday, October 2, 2014 9:21:07 AM
> >> Subject: Re: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan
> >>
> >> I have successfully created a "hybrid" cluster between an application
> >> using Infinispan in embedded mode and an Infinispan server by doing the
> >> following on the embedded side:
> >>
> >> - use a JGroups Channel wrapped in a MuxHandler
> >> - use a custom class resolver which simulates (or rather... hacks) the
> >>   behaviour of the ModularClassResolver when not using modules
> >>
> >> You can find the code at my personal GitHub repo:
> >>
> >> https://github.com/tristantarrant/infinispan-playground/tree/master/src/main/java/net/dataforte/infinispan/hybrid
> >>
> >> Suggestions and improvements are welcome.
> >>
> >> Tristan
> >>
> >> On 30/09/14 10:01, Stelios Koussouris wrote:
> >>> Hi,
> >>>
> >>> To give a bit of context on this: we are doing a POC where the customer
> >>> wishes to utilize JDG to speed up their application. We need (due to some
> >>> customer requirements) to cluster EMBEDDED JDG (Infinispan library mode)
> >>> with REMOTE JDG (Infinispan Server) nodes. The Infinispan jars should be
> >>> the same, as they are only libraries and they are on the same version.
> >>> However, during "clustering" of the caches we started seeing errors which
> >>> looked like they were due to the fact that the clustering of the caches
> >>> contained different info between the two types of cache instantiation
> >>> (embedded vs server).
> >>>
> >>> The result was a suggestion to create our own MuxChannel (I don't know if
> >>> we have any other alternatives at this stage to cluster embedded with
> >>> server Infinispan caches) but at the moment we are facing
> >>> https://gist.github.com/skoussou/5edc5689446b67f85ae8
> >>>
> >>> Regards,
> >>>
> >>> Stylianos Kousouris
> >>> Red Hat Middleware Consultant
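For reference, a rough plain-JGroups sketch of the channel wrapping Tristan describes above, assuming JGroups 3.x and Infinispan 6.x/7.x APIs. It plays the same role as org.jboss.as.clustering.jgroups.MuxChannel without importing anything from the WF subsystem; the class-resolver half of the recipe is covered further down.

    import org.infinispan.remoting.transport.Transport;
    import org.infinispan.remoting.transport.jgroups.JGroupsTransport;
    import org.jgroups.JChannel;
    import org.jgroups.blocks.mux.MuxUpHandler;

    public class HybridTransportFactory {
        // Builds a Transport over a channel that multiplexes like the server side.
        public static Transport createMuxedTransport(String jgroupsConfig) throws Exception {
            JChannel channel = new JChannel(jgroupsConfig);   // e.g. "jgroups-udp.xml"
            channel.setUpHandler(new MuxUpHandler());         // same role as MuxChannel's mux handler
            return new JGroupsTransport(channel);             // Infinispan connects the channel itself
        }
    }
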
> >>>
> >>> ----- Original Message -----
> >>> From: "Tristan Tarrant" <[email protected]>
> >>> To: "infinispan -Dev List" <[email protected]>, "Kurt T Stam" <[email protected]>
> >>> Cc: "Stelios Koussouris" <[email protected]>, "Richard Achmatowicz" <[email protected]>
> >>> Sent: Tuesday, 30 September, 2014 8:02:27 AM
> >>> Subject: Re: [infinispan-dev] Clustering standalone Infinispan w/ WF running Infinispan
> >>>
> >>> I don't know what Kurt is doing, but Stelios is attempting to cluster an
> >>> application using embedded Infinispan deployed within WF together with
> >>> an Infinispan Server instance.
> >>> The application is managing its own caches, and therefore it is not
> >>> interacting with the underlying Infinispan and JGroups subsystems in WF.
> >>> Infinispan Server uses its own Infinispan and JGroups subsystems (which
> >>> are forked from WF's) and therefore uses MuxChannels.
> >>>
> >>> I told Stelios to use a MuxChannel-wrapped Channel in his application
> >>> and it solved part of the issue (he was initially importing the one
> >>> included in WF's jgroups subsystem, but now he's using his local copy),
> >>> but now he has run into further problems and I believe what Paul &
> >>> Dennis have written might be correct.
> >>>
> >>> The code that configures this is in EmbeddedCacheManagerConfigurationService:
> >>>
> >>> GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder();
> >>> ModuleLoader moduleLoader = this.dependencies.getModuleLoader();
> >>> builder.serialization().classResolver(ModularClassResolver.getInstance(moduleLoader));
> >>>
> >>> I don't know how you'd get a ModuleLoader from within a WF deployment,
> >>> but I'm sure it can be done.
> >>>
> >>> Tristan
> >>>
> >>> On 29/09/14 18:57, Paul Ferraro wrote:
> >>>> You should not need to use a MuxChannel. This would only be necessary
> >>>> if there are other EAP services sharing the channel. Using a MuxChannel
> >>>> allows your standalone Infinispan instance to filter these irrelevant
> >>>> messages. However, in JDG, there should be no services other than
> >>>> Infinispan using the channel - hence the MuxChannel stuff is
> >>>> unnecessary.
> >>>>
> >>>> I think Dennis' earlier response was spot on. EAP/JDG configures its
> >>>> cache managers using a ModularClassResolver (which includes a module
> >>>> name along with the class name when marshalling). Your standalone
> >>>> Infinispan instances do not use this and therefore cannot make sense of
> >>>> the message body.
> >>>>
> >>>> Paul
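If the application really is running as a WF deployment, one way it might obtain a ModuleLoader and install the same resolver is through org.jboss.modules.Module, since deployment classes are themselves loaded from modules. A rough, untested sketch, assuming the org.jboss.modules and org.jboss.marshalling APIs are visible to the deployment:

    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.jboss.marshalling.ModularClassResolver;
    import org.jboss.modules.Module;
    import org.jboss.modules.ModuleLoader;

    public class DeploymentResolverSetup {
        // Deployment classes are loaded from a module, so the deployment's own
        // ModuleLoader can be looked up from any class inside it.
        public static GlobalConfigurationBuilder newGlobalBuilder() {
            ModuleLoader loader = Module.forClass(DeploymentResolverSetup.class).getModuleLoader();
            GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder();
            builder.serialization()
                   .classResolver(ModularClassResolver.getInstance(loader));
            return builder;
        }
    }
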
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Kurt T Stam" <[email protected]>
> >>>>> To: "Stelios Koussouris" <[email protected]>, "Radoslav Husar" <[email protected]>
> >>>>> Cc: "Galder Zamarreño" <[email protected]>, "Paul Ferraro" <[email protected]>, "Richard Achmatowicz" <[email protected]>, "infinispan -Dev List" <[email protected]>
> >>>>> Sent: Monday, September 29, 2014 11:39:59 AM
> >>>>> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan
> >>>>>
> >>>>> Thanks for following up Stelios, I think Galder is traveling the next 2
> >>>>> weeks.
> >>>>>
> >>>>> So - do we need fixes on both ends then so that the boot order does not
> >>>>> matter? In which project(s) would we apply those changes? Or can they
> >>>>> be applied in the end-user's code?
> >>>>>
> >>>>> Thx,
> >>>>>
> >>>>> --Kurt
> >>>>>
> >>>>> On 9/26/14, 11:19 AM, Stelios Koussouris wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> Rado: It is both ways, i.e. if I start the JDG Server first I get the
> >>>>>> issue on the library mode side when I start that one. If I reverse the
> >>>>>> order of startup I get it on the JDG Server side.
> >>>>>>
> >>>>>> Question:
> >>>>>> -----------------------------------------------------------------------
> >>>>>> ...IMO the channel needs to be wrapped as
> >>>>>> org.jboss.as.clustering.jgroups.MuxChannel before passing to
> >>>>>> infinispan.
> >>>>>> ...
> >>>>>> -----------------------------------------------------------------------
> >>>>>> For now this is not being done. If I wanted to do it manually on the
> >>>>>> library side, where I can create the protocol stack programmatically,
> >>>>>> are we talking about something like this?
> >>>>>>
> >>>>>> ProtocolStackConfigurator configurator =
> >>>>>>     ConfiguratorFactory.getStackConfigurator("jgroups-udp.xml");
> >>>>>> MuxChannel channel = new MuxChannel(configurator);
> >>>>>> org.infinispan.remoting.transport.Transport transport =
> >>>>>>     new org.infinispan.remoting.transport.jgroups.JGroupsTransport(channel);
> >>>>>>
> >>>>>> ....
> >>>>>> then replace the below:
> >>>>>>
> >>>>>> new GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics()
> >>>>>>     .cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable()
> >>>>>>     .transport().clusterName("UDM-CLUSTER")
> >>>>>>     .addProperty("configurationFile", "jgroups-udp.xml")
> >>>>>>
> >>>>>> WITH
> >>>>>>
> >>>>>> new GlobalConfigurationBuilder().clusteredDefault().globalJmxStatistics()
> >>>>>>     .cacheManagerName("RDSCacheManager").allowDuplicateDomains(true).enable()
> >>>>>>     .transport(Transport).clusterName("UDM-CLUSTER")
> >>>>>>
> >>>>>> Btw, someone mentioned that if I follow this method I need to know the
> >>>>>> assigned mux ids, but it is not quite clear what that means with regard
> >>>>>> to the JGroupsTransport configuration.
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Stylianos Kousouris
> >>>>>> Red Hat Middleware Consultant
> >>>>>>
> >>>>>> ----- Original Message -----
> >>>>>> From: "Radoslav Husar" <[email protected]>
> >>>>>> To: "Galder Zamarreño" <[email protected]>, "Paul Ferraro" <[email protected]>
> >>>>>> Cc: "Richard Achmatowicz" <[email protected]>, "infinispan -Dev List" <[email protected]>, "Stelios Koussouris" <[email protected]>, "Kurt T Stam" <[email protected]>
> >>>>>> Sent: Friday, 26 September, 2014 3:47:16 PM
> >>>>>> Subject: Re: Clustering standalone Infinispan w/ WF running Infinispan
> >>>>>>
> >>>>>> From what Stelios is telling me, the question is a little bit the other
> >>>>>> way round: he is using library mode Infinispan and JGroups in EAP and
> >>>>>> connecting to JDG. So the question is what JDG is doing with the stack,
> >>>>>> not AS/WF, as its infinispan/jgroups subsystem is not used.
> >>>>>>
> >>>>>> Unfortunately I don't have access to the JDG repo so I don't know what
> >>>>>> changes have been made there, but if you are using the same jgroups
> >>>>>> logic, IMO the channel needs to be wrapped as
> >>>>>> org.jboss.as.clustering.jgroups.MuxChannel before passing it to
> >>>>>> Infinispan.
> >>>>>>
> >>>>>> Rado
> >>>>>>
> >>>>>> On 26/09/14 15:03, Galder Zamarreño wrote:
> >>>>>>> Hey Paul,
> >>>>>>>
> >>>>>>> In the last couple of days, a couple of people have encountered the
> >>>>>>> exception in [1] when trying to cluster a standalone Infinispan app
> >>>>>>> with its own JGroups configuration file with an AS/WF running
> >>>>>>> Infinispan cache.
> >>>>>>>
> >>>>>>> From my POV, there are 3 possible causes:
> >>>>>>>
> >>>>>>> 1. Dependency mismatches between AS/WF and the standalone app. Having
> >>>>>>> done some quick study of Kurt’s case, apart from micro version
> >>>>>>> changes, all looks good.
> >>>>>>>
> >>>>>>> 2. Mismatch in the Infinispan and/or JGroups configuration file.
> >>>>>>>
> >>>>>>> 3. AS/WF puts something on the clustered wire that standalone
> >>>>>>> Infinispan does not expect. Are you still doing multiplexing? Could
> >>>>>>> you be adding extra info to the wire?
> >>>>>>>
> >>>>>>> With this email, I’m trying to get some clarification from you on
> >>>>>>> whether the issue could be due to the 3rd option. If it’s either of
> >>>>>>> the first two, it’s a matter of digging and finding the difference,
> >>>>>>> but if it’s the 3rd one, it’s more problematic.
> >>>>>>>
> >>>>>>> Any ideas?
> >>>>>>>
> >>>>>>> [1] https://gist.github.com/skoussou/92f062f2d0bd17168e01
> >>>>>>> --
> >>>>>>> Galder Zamarreño
> >>>>>>> [email protected]
> >>>>>>> twitter.com/galderz
> >>>>>>>
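For completeness, a sketch of how the second builder chain Stelios describes might be written once a real Transport instance is available (assuming Infinispan 6.x/7.x builder APIs; the ".transport(Transport)" placeholder above becomes a call on the transport() sub-builder):

    import org.infinispan.configuration.global.GlobalConfiguration;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.remoting.transport.Transport;

    public class GlobalConfigFactory {
        // 'transport' is a pre-built JGroupsTransport, e.g. one wrapping a muxed channel.
        public static GlobalConfiguration build(Transport transport) {
            return new GlobalConfigurationBuilder()
                    .clusteredDefault()
                    .globalJmxStatistics()
                        .cacheManagerName("RDSCacheManager")
                        .allowDuplicateDomains(true)
                        .enable()
                    .transport()
                        .transport(transport)   // replaces addProperty("configurationFile", ...)
                        .clusterName("UDM-CLUSTER")
                    .build();
        }
    }
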
_______________________________________________
infinispan-dev mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
