Others correct me if I'm wrong, but I believe using your multi-topology uber jar to deploy only one topology should not affect the other running topologies that were deployed earlier from a previous version of the same jar; the jar is stored at deployment time under a unique ID.

You also don't strictly need a separate main method for each topology, assuming you parameterize your use of the StormSubmitter. I've done this successfully with one main method that deploys multiple topologies in a loop: if a topology is already deployed, an exception is thrown, which I catch inside the loop before continuing on. My dev deployment process then becomes: kill the topologies I wish to re-deploy, then run the one main method, which tries to submit all topologies but only succeeds on the ones that had been killed.
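A minimal, self-contained sketch of that loop pattern (names and the SubmitterStub class are illustrative stand-ins; in a real project the loop body would call Storm's StormSubmitter.submitTopology, which throws AlreadyAliveException for a topology that is still running):

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class DeployAll {

    // Mirrors the shape of Storm's AlreadyAliveException for this sketch.
    static class AlreadyAliveException extends Exception {
        AlreadyAliveException(String name) { super(name + " is already running"); }
    }

    // Stand-in for the cluster: remembers which topology names are alive.
    static class SubmitterStub {
        private final Set<String> alive = new HashSet<>();
        void submitTopology(String name) throws AlreadyAliveException {
            if (!alive.add(name)) throw new AlreadyAliveException(name);
        }
    }

    // Try to submit every topology; skip (but don't abort on) the ones
    // that are already deployed, recording what happened to each.
    static Map<String, String> deployAll(SubmitterStub cluster, String... names) {
        Map<String, String> results = new LinkedHashMap<>();
        for (String name : names) {
            try {
                cluster.submitTopology(name);
                results.put(name, "submitted");
            } catch (AlreadyAliveException e) {
                results.put(name, "skipped: " + e.getMessage());
            }
        }
        return results;
    }

    public static void main(String[] args) {
        SubmitterStub cluster = new SubmitterStub();
        // First run submits everything; a second run only re-submits
        // whatever has been killed in between.
        deployAll(cluster, "topo-a", "topo-b", "topo-c")
            .forEach((name, result) -> System.out.println(name + " -> " + result));
    }
}
```

The point of catching inside the loop is that one already-running topology doesn't prevent the rest from being submitted.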
On Thu, Jun 26, 2014 at 2:37 PM, Sandon Jacobs <[email protected]> wrote:

> We currently have 1 project in GIT containing multiple topologies. With
> this model, I use 1 compiled artifact containing several "topology"
> classes, each with a main method to configure, build, and submit the
> topology. Some of these topologies share some common components (bolt
> classes, util classes, etc…).
>
> I do not necessarily need to deploy the newest version of each topology
> every time we release code. Here are a couple of options I have thought of:
>
> - Break up the project into a parent module, keeping 1 GIT repo, with a
>   child module for common components and a child for each topology.
> - Break the common code into a GIT repo, then each topology into a GIT
>   repo (don't really wanna do this one at all).
> - Have the gradle build create a JAR per topology, using
>   exclusion/inclusion in a gradle task.
>
> I see PROs and CONs to each approach. I am curious as to how others are
> maintaining this model. Do you have a separate compiled artifact for each
> topology? Do you use a similar approach to ours?
>
> Thanks in advance…
