Regarding #3, Matt, I just started a dev list discussion about configs,
the various components that manage them, and how they interact.  Hopefully
we end up with a coherent approach, but in the lead-up to that, I'd say
yes, there is a valid need for such an architecture.  Please chime in on
that thread, or even in reply to this one (I'll take anything I can get
;)) with your thoughts.

On Thu, Jan 12, 2017 at 5:49 PM, Matt Foley <ma...@apache.org> wrote:

> I think I hear 3 major areas not adequately covered by our usual “code
> review”:
> 1. Documentation
> 2. Deployment Builds
> 3. Management of config parameters
>
> The other areas mentioned by Otto (testing, perf testing, Stellar impact,
> and REST API impact) are entirely valid, but fall under existing code and
> architecture that seems generally adequate.
>
> Regarding #1, Documentation, I’d like to branch a discussion thread for a
> proposal I’m about to make, to enhance our use of README files as usable
> and up-to-date end-user documentation, linked from the Metron site.
> Implicit in that is the idea that we’d deprecate using the cwiki for
> anything but long-lived demonstrations/tutorials that are unlikely to go
> obsolete.
>
> For #2, Deployment Builds:  This is difficult, and unfortunately I’m not
> an expert with these things, but we need to automate this as much as
> possible.  Config params will always interact heavily with deployment
> issues, but let’s leave that for #3 :0)
>
> As far as RPMs, Ansible playbooks, or Docker images go, we'd like to
> automate things so that developers never have to do anything extra when
> committing modifications to existing components.  Even when new
> components are added (like the Profiler is being added now), the
> packaging should, insofar as possible, be driven by Maven declarations.
> But that takes input from the experts in each of these areas.
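>
> To make the Maven idea concrete, here is roughly the shape a per-module
> declaration could take.  This is only a sketch of the mechanism: the
> stock rpm-maven-plugin, the group, and the paths are illustrative
> assumptions, not something Metron has adopted.
>
>     <!-- Sketch only: package this module's jar into an RPM during the
>          normal build, so committers don't hand-maintain spec files. -->
>     <plugin>
>       <groupId>org.codehaus.mojo</groupId>
>       <artifactId>rpm-maven-plugin</artifactId>
>       <executions>
>         <execution>
>           <goals><goal>attached-rpm</goal></goals>
>         </execution>
>       </executions>
>       <configuration>
>         <group>Applications/Metron</group>
>         <mappings>
>           <mapping>
>             <directory>/usr/metron/${project.version}/lib</directory>
>             <sources>
>               <source>
>                 <location>target/${project.build.finalName}.jar</location>
>               </source>
>             </sources>
>           </mapping>
>         </mappings>
>       </configuration>
>     </plugin>
>
> A new component would then get packaged just by declaring its own
> mapping, with no separate deployment step for the developer to remember.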
>
> Also, what would people think of dropping Ansible in favor of Ambari and
> Docker as the preferred deployment management approaches?
>
> #3, Management of config parameters:  I’ve been thinking about this
> lately, but haven’t written up a proposal yet.  I’m bothered by the
> wide-ranging variability in the way Metron configs are managed: files,
> ZooKeeper, environment variables, traditional Hadoop-style configs, and
> roll-your-own JSON configs, sometimes shared, sometimes duplicated, not
> to mention Ambari layered over it all.  This has been encouraged by the
> huge number of Stack components that Metron depends on, and the relative
> independence of the components Metron itself is composed of.
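>
> One way to see the problem: even reading a single sensor config today
> means knowing which of those places it happens to live in.  Below is a
> minimal sketch of a unified access path over two of them; the class and
> the ZooKeeper path are hypothetical, not existing Metron code, and the
> ZooKeeper side uses the stock Curator client:
>
>     import java.nio.file.Files;
>     import java.nio.file.Paths;
>     import org.apache.curator.framework.CuratorFramework;
>     import org.apache.curator.framework.CuratorFrameworkFactory;
>     import org.apache.curator.retry.ExponentialBackoffRetry;
>
>     public class ConfigSource {  // hypothetical unified loader
>       public static byte[] load(String quorum, String sensor) throws Exception {
>         CuratorFramework client = CuratorFrameworkFactory.newClient(
>             quorum, new ExponentialBackoffRetry(1000, 3));
>         client.start();
>         try {
>           // Prefer the copy in ZooKeeper; the path shown is illustrative.
>           String zkPath = "/metron/topology/enrichment/" + sensor;
>           if (client.checkExists().forPath(zkPath) != null) {
>             return client.getData().forPath(zkPath);
>           }
>           // Fall back to a local file, e.g. in a dev environment.
>           return Files.readAllBytes(Paths.get("config/" + sensor + ".json"));
>         } finally {
>           client.close();
>         }
>       }
>     }
>
> The point isn’t this particular fallback order; it’s that callers stop
> caring where a config physically lives.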
>
> But I think, as Otto points out, that as we grow the number of components
> and mature out of the incubator, we have to get this under control.  We
> need an architecture for managing the configuration parameters of the
> Metron topologies.  (We can’t do much about the Stack components, but
> Ambari is establishing a culture around managing those.)  The
> architecture also needs to include an update methodology for semantic
> changes in parameter sets.
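>
> On the update methodology: one option is to version the parameter sets
> themselves and migrate old ones forward on load.  Purely a sketch of the
> idea; none of these types exist in Metron today:
>
>     import java.util.List;
>     import java.util.Map;
>
>     public class ConfigMigrator {  // hypothetical
>       public interface ConfigUpgrader {
>         int from();  // schema version this upgrader accepts
>         Map<String, Object> upgrade(Map<String, Object> old);
>       }
>
>       // Apply upgraders (assumed sorted by from()) until the parameter
>       // set reaches the current schema version.
>       public static Map<String, Object> migrate(Map<String, Object> config,
>                                                 List<ConfigUpgrader> upgraders,
>                                                 int current) {
>         int v = (int) config.getOrDefault("schemaVersion", 1);
>         for (ConfigUpgrader u : upgraders) {
>           if (u.from() == v) {
>             config = u.upgrade(config);
>             v++;
>           }
>         }
>         config.put("schemaVersion", current);
>         return config;
>       }
>     }
>
> A renamed or restructured parameter then becomes an explicit upgrader
> rather than a silent breakage for anyone carrying an old config.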
>
> I’m mulling such an architecture, but what do other people think?  Is this
> a valid need?
>
> Thanks,
> --Matt
>
> On 1/12/17, 8:23 AM, "Michael Miklavcic" <michael.miklav...@gmail.com>
> wrote:
>
>     Hi Otto,
>
>     You make a great point.
>
>     As far as RPM/MPack goes, we do have some work in the pipeline for
>     streamlining things a bit with the RPMs and MPack code, such that they
>     will be used for performing the Metron install in the sandbox VMs
>     rather than Ansible.  (I'd search for the public Jiras and post them
>     here, but Jira is down for maintenance currently.)  This should help
>     make it obvious when a change or new feature requires modifications,
>     because they will be in the critical path to testing.
>
>     Documentation is still tricky because we have README files, javadoc,
>     and the wiki.  But in general I think the current approach is to put
>     concrete functionality docs in the READMEs as much as possible because
>     they can be tracked and versioned with Git.  I think the community has
>     actually been doing a pretty good job here.  The wiki is a little more
>     tricky because there is typically only one version, which tracks
>     master, not necessarily the latest stable release.
>
>     Mike
>
>
>     On Thu, Jan 12, 2017 at 8:42 AM, Otto Fowler <ottobackwa...@gmail.com>
>     wrote:
>
>     > As Metron evolves to include new deployment options, features, and
>     > configurations, it is hard, and only getting harder, for
>     > contributors, committers, and reviewers to understand what changes
>     > are required across the different areas of the system to correctly
>     > and completely introduce a change or new feature.
>     >
>     > We have talked some about the requirements and expectations for
>     > submitters with regard to tests and coverage, coding style, and
>     > documentation, but I don’t think we have enough guidance on
>     > deployment or other changes that need to be considered.  For
>     > committers it is pretty much the same, with the extra stuff around
>     > that process.
>     >
>     > Right now it seems that, as a committer, I’m counting on others like
>     > Nick or Casey to understand anything that may be missing from a
>     > submission when I review it.  Should there be an Ambari/RPM change?
>     > Does this change the REST API?  Does this affect STELLAR Lang/SHELL?
>     > Does it need custom Docker Compose work?  Etc., etc.
>     >
>     > I think that as we grow the community and try to get out of
>     > incubation, it will be impractical for us to count on this, and even
>     > now we are increasing the risk of regressions or (unrealized)
>     > functional gaps that will have an adverse effect on keeping master
>     > stable.
>     >
>     > I think we should discuss whether and how we can improve this, or
>     > the issue of my sanity ;).
>     >
>     > What are the criteria that we need submitters and reviewers to have
>     > in mind?
>     > * Test
>     > * Doc
>     > ** Obsoleting of existing documentation/how-tos (even Hortonworks posts)
>     > * Performance
>     > ** How do we test for performance?
>     > *** Standards
>     > *** Tools and processes
>     > * Deployment
>     > ** RPM
>     > ** Docker
>     > ** Ansible
>     > ** Ambari
>     > ** AWS Script
>     > * Functional
>     > ** STELLAR/Shell
>     > ** REST APIs
>     > * Dev/review guide
>     > ** Does the review/submit guide need to account for it?
>     >
>     > Any thoughts?
>     >
>