I'd think it should be a quick-start, unsecured single instance by default to lower the barrier for people new to NiFi, but we should also support more complex setups driven by user-supplied configuration.
E.g. a lot of the database images will run any script in /docker-entrypoint-initdb.d [1][2]. This allows users to extend the image in any way they see fit. We could do something like /docker-entrypoint-initnifi.d for arbitrary scripts. We could also ship a handful of packaged config files that support clusters, etc., for easier use with compose.

[1] https://hub.docker.com/_/postgres/
[2] https://hub.docker.com/_/mysql/

On Wed, Dec 28, 2016 at 10:12 AM, Joe Percivall <[email protected]> wrote:
> +1
>
> Great idea Jeremy. I love even more all of the "I volunteer" statements
> throughout, haha.
>
> Per Joey's point regarding single vs. cluster, I'd say it is probably
> better to get an MVP of standalone first. Also, I'd think the use cases
> that would be using Docker wouldn't need to be clustered. Do you see it
> differently?
>
> Joe
>
> On Wed, Dec 28, 2016 at 10:01 AM, Joey Frazee <[email protected]> wrote:
> >
> > +1
> >
> > I think this is a great idea because there are at least half a dozen or
> > more Dockerfiles and published images floating around. Having something
> > that is endorsed and reviewed by the project should help ensure quality.
> >
> > One question though: will the images target a single-instance NiFi or a
> > cluster, e.g., using compose? Or both?
> >
> > -joey
> >
> > > On Dec 28, 2016, at 8:55 AM, Jeremy Dyer <[email protected]> wrote:
> > >
> > > Team,
> > >
> > > I wanted to discuss getting an official Apache NiFi Docker image
> > > similar to other Apache projects like Storm [1], httpd [2],
> > > Thrift [3], etc.
> > >
> > > Official Docker images are hosted at http://www.dockerhub.com and made
> > > available to the Docker runtime of end users without them having to
> > > build the images themselves. The process of making a Docker image
> > > "official", meaning that it is validated and reviewed by a community
> > > of Docker folks for security flaws, best practices, etc., works very
> > > similarly to how our standard contribution process for NiFi works
> > > today. We as a community would create our Dockerfile(s), review them
> > > just like we review any JIRA today, and then commit them against our
> > > codebase.
> > >
> > > There is an additional step from there: once we have a commit against
> > > our codebase, we would need an "ambassador" (I happily volunteer to
> > > handle this if there are no objections) who would open a GitHub pull
> > > request against the official Docker image repo [4]. Once that PR has
> > > been successfully reviewed by the official repo folks, the image would
> > > be hosted on Docker Hub and readily available to end users.
> > >
> > > In my mind, the steps required to reach this goal would be:
> > > 1. Create NiFi, MiNiFi, and MiNiFi-CPP JIRAs for creating the initial
> > > folder structure and baseline Dockerfiles in each repo. I volunteer to
> > > take this on as well.
> > > 2. Once the JIRA is completed, reviewed, and a community thumbs-up is
> > > given, I will request the Docker Hub repo handle "library/apachenifi",
> > > with the maintainer contact email set to <[email protected]>.
> > > 2a. I suggest we follow a naming structure like
> > > "library/apachenifi:nifi-1.1.0", "library/apachenifi:minifi-0.1.0",
> > > and "library/apachenifi:minifi-cpp-0.1.0". This keeps our official
> > > image much cleaner than having three separate official images for the
> > > subprojects.
> > > 3. I will open a PR against [4] with our community Dockerfiles.
> > > 4. After each release I will continue to open pull requests against
> > > [4] to ensure the latest releases are present.
> > >
> > > Please let me know your thoughts.
> > >
> > > [1] - https://hub.docker.com/r/library/storm/
> > > [2] - https://hub.docker.com/_/httpd/
> > > [3] - https://hub.docker.com/_/thrift/
> > > [4] - https://github.com/docker-library/official-images
> > >
> > > Thanks,
> > > Jeremy Dyer
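For reference, the init-script pattern mentioned at the top of the thread (what the postgres and mysql images do with /docker-entrypoint-initdb.d) could be sketched in a NiFi image entrypoint roughly like this. Everything here is hypothetical: the directory name /docker-entrypoint-initnifi.d and the final NiFi launch line are illustrative, not part of any existing image.

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: before starting NiFi, source any
# user-provided *.sh scripts found in an init directory, mirroring the
# pattern used by the official postgres/mysql images.
set -e

run_init_scripts() {
  init_dir="$1"
  [ -d "$init_dir" ] || return 0      # nothing mounted, nothing to do
  for f in "$init_dir"/*.sh; do
    [ -e "$f" ] || continue           # glob matched nothing
    echo "running $f"
    . "$f"                            # source, so scripts can export env vars for NiFi
  done
}

run_init_scripts "${INIT_DIR:-/docker-entrypoint-initnifi.d}"

# exec "$NIFI_HOME/bin/nifi.sh" run   # then hand off to NiFi (path illustrative)
```

Users would then mount scripts into that directory (`-v ./init:/docker-entrypoint-initnifi.d`) without rebuilding the image at all.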
