I understand the issues of managing different versions of two correlated 
components, but it is possible to create unit tests that exercise core 
components of both. It takes more effort, but it can be done.
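For example, here is a minimal sketch of the kind of loosely coupled, 
string-based JMX access such cross-version tests tend to rely on (the 
approach Joey describes below as calling "JMX methods by string"). It uses 
the JDK's built-in platform MBeans as a stand-in for Cassandra's; the class 
name and object names are illustrative, not from either project:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

// Illustrative sketch: calling JMX attributes/operations by string name
// rather than through a compiled proxy interface. If the server's MBean
// interface changes between versions, this still compiles; only the
// names have to match at runtime.
public class JmxByString {
    public static void main(String[] args) throws Exception {
        // A real sidecar would open a remote JMXConnector to the server;
        // the local platform MBeanServer stands in for that here.
        MBeanServerConnection mbs = ManagementFactory.getPlatformMBeanServer();

        // Read an attribute by string name (a standard JDK MBean).
        Long uptime = (Long) mbs.getAttribute(
                new ObjectName("java.lang:type=Runtime"), "Uptime");

        // Invoke an operation by string name, with no typed interface.
        mbs.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);

        if (uptime < 0) {
            throw new AssertionError("JVM uptime should be non-negative");
        }
        System.out.println("uptime-ms=" + uptime);
    }
}
```

A test suite can point the same code at different server versions just by 
changing the connection, at the cost of losing compile-time checking of the 
names.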

That being said, from my experience using Reaper, and using OpsCenter in the 
DataStax distribution, I prefer a separate project that is loosely tied to 
the system rather than joined at the hip. Whenever there is an update to 
Reaper or OpsCenter, I can pull it down and test it before rolling it out, 
and that happens much more frequently than rolling out updates to a C* cluster.


Rahul
On Aug 17, 2018, 9:41 AM -0700, Jonathan Haddad <j...@jonhaddad.com>, wrote:
> Speaking from experience (Reaper), I can say that developing a sidecar
> admin / repair tool out of tree that's compatible with multiple versions
> really isn't that big of a problem. We've been doing it for a while now.
>
> https://github.com/thelastpickle/cassandra-reaper/blob/master/.travis.yml
>
> On Fri, Aug 17, 2018 at 9:39 AM Joseph Lynch <joe.e.ly...@gmail.com> wrote:
>
> > While I would love to use a different build system (e.g. gradle) for the
> > sidecar, I agree with Dinesh that a separate repo would make sidecar
> > development much harder to verify, especially on the testing and
> > compatibility front. As Jeremiah mentioned we can always choose later to
> > release the sidecar artifact separately or more frequently than the main
> > server regardless of repo choice, and as per Roopa's point, having a
> > separate release artifact (jar or deb/rpm) is probably a good idea, at
> > least until the default Cassandra packages no longer automatically stop
> > and start Cassandra on install.
> >
> > While we were developing the repair scheduler in a separate repo we had a
> > number of annoying issues that only started surfacing once we started
> > merging it directly into the trunk tree:
> > 1. It is hard to compile/test against unreleased versions of Cassandra
> > (e.g. the JMX interfaces changed a lot with 4.x, and it was pretty tricky
> > to compile against that as the main project doesn't release nightly
> > snapshots or anything like that, so we had to build local trunk jars and
> > then manually depend on them).
> > 2. Integration testing and cross-version compatibility are extremely hard.
> > The sidecar is going to be involved in multi-node coordination (e.g.
> > monitoring, metrics, maintenance) and will be tightly coupled to JMX
> > interface choices in the server. Making sure that it all works with
> > multiple versions of Cassandra is much harder if it's in a separate repo
> > that has to mirror Cassandra's release cycle. It seems much easier to have
> > it in tree and just say "the in-tree sidecar is tested against that
> > version of Cassandra". Every time we cut a Cassandra server branch, the
> > sidecar branches with it.
> >
> > We experience these pains all the time with Priam being in a separate repo,
> > where every time we support a new Cassandra version a bunch of JMX
> > endpoints break and we have to refactor the code to either call JMX methods
> > by string or cut a new Priam branch. A separate artifact is pretty
> > important, but a separate repo just allows drift. Furthermore, from the
> > Priam experience, I also don't think it's realistic to say one version of a
> > sidecar artifact can actually support multiple server versions.
> >
> > -Joey
> >
> > On Fri, Aug 17, 2018 at 12:00 PM Jeremiah D Jordan <jerem...@datastax.com>
> > wrote:
> >
> > > Not sure why the two things being in the same repo means they need the
> > > same release process. You can always do interim releases of the
> > > management artifact between server releases, or even have completely
> > > decoupled releases.
> > >
> > > -Jeremiah
> > >
> > > > On Aug 17, 2018, at 10:52 AM, Blake Eggleston <beggles...@apple.com>
> > > wrote:
> > > >
> > > > I'd be more in favor of making it a separate project, basically for
> > > > all the reasons listed below. I'm assuming we'd want a management
> > > > process to work across different versions, which will be more awkward
> > > > if it's in tree. Even if that's not the case, keeping it in a
> > > > different repo at this point will make iteration easier than if it
> > > > were in tree. I'd imagine (or at least hope) that validating the
> > > > management process for release would be less difficult than the main
> > > > project, so tying them to the Cassandra release cycle seems
> > > > unnecessarily restrictive.
> > > >
> > > >
> > > > On August 17, 2018 at 12:07:18 AM, Dinesh Joshi
> > > > (dinesh.jo...@yahoo.com.invalid) wrote:
> > > >
> > > > > On Aug 16, 2018, at 9:27 PM, Sankalp Kohli <kohlisank...@gmail.com>
> > > wrote:
> > > > >
> > > > > I am bumping this thread because the patch for this has landed with
> > > > > repair functionality.
> > > > >
> > > > > I have the following proposal for this, which I can put in the JIRA
> > > > > or a doc:
> > > > >
> > > > > 1. We should see if we can keep this in a separate repo like Dtest.
> > > >
> > > > This would imply a looser coupling between the two. Keeping things
> > > > in-tree is my preferred approach. It makes testing, dependency
> > > > management and code sharing easier.
> > > >
> > > > > 2. It should have its own release process.
> > > >
> > > > This means there would now be two releases that need to be managed and
> > > > coordinated.
> > > >
> > > > > 3. It should have integration tests for different versions of
> > > > > Cassandra it will support.
> > > >
> > > > Given the lack of test infrastructure, this will be hard, especially
> > > > if you have to qualify a matrix of builds.
> > > >
> > > > Dinesh
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
> > > > For additional commands, e-mail: dev-h...@cassandra.apache.org
> > > >
> > >
> > >
> >
>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
