Good day,

I wonder if this community can provide some hints on handling the following.

On my last few projects I was asked to set up (or clean up) automated
builds, so that after code changes the team gets (at least) deployable
software packages in a minimum of time. Starting from the final desired
result, I was able to trace down every module needed to build the
"master CD" (I sketch this tracing step right after the list below). It
was especially easy for Maven-based projects. However, discovering these
modules in the source repositories always highlighted:

   - lack of knowledge about whether a certain module in the repository
   is needed, or even used, by multiple products;
   - duplication of modules with similar purposes - sometimes a conscious
   decision to copy in order to avoid breaking backward compatibility
   with unknown dependents;
   - existence of build jobs for obsolete modules;
   - absence of builds for stable modules that have not changed in the
   last couple of years;
   - and similar issues.
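
To make that tracing step concrete: assuming the delivery is driven by a
single top-level POM (I call its checkout master-cd here - the real name
varies per project), the standard maven-dependency-plugin already lists
everything it pulls in:

    cd master-cd        # hypothetical checkout of the delivery POM
    # prints every direct and transitive dependency of the delivery
    mvn dependency:tree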

Assuming all projects in the repositories are Maven-ized, how would you
approach determining which modules are required for the final
deliverables, and which might break if another module changes (a sort of
reverse dependency management)?
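
To illustrate what I mean by "reverse": the only approach I see so far
is brute force, i.e. asking every checked-out project whether a given
module appears anywhere in its tree (com.example:module-x is a made-up
coordinate below - substitute a real one):

    # run from a directory holding one checkout per project
    for pom in */pom.xml; do
        dir="${pom%/pom.xml}"
        echo "== $dir"
        # the filtered tree prints only branches leading to module-x, so
        # any output beyond the project's own line means "depends on it"
        ( cd "$dir" && mvn dependency:tree -Dincludes=com.example:module-x )
    done

This works, but scales poorly and only sees what is currently checked
out - hence my question.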

Do you actually consider this situation a problem, or is it just the
perfectionist in me talking? ;-)

Thank you for your attention,
Viktor
