[pardon me breaking the email threading here; only just joined] Jan wrote:
> Next step would be to agree on how we bring the current code into our project
> and ASF repos in the best possible way. Questions that arise are:
>
> 1. Are we allowed to maintain ASF code in a non-ASF repo? If not, how do we
>    transition to an ASF git repo?
>    * Can it be a sub folder in our main repo or does it need to be a
>      separate repo?

The way it works (from the official library's point of view) is that we maintain
https://github.com/docker-library/official-images/blob/master/library/solr,
which contains a link to a repo (in our case
https://github.com/docker-solr/docker-solr.git), a particular git commit, and a
particular directory for each version. That file is consumed by their build
infrastructure. The library team reviews the changes we make to that file, and
the corresponding changes we make to the Dockerfiles and bash scripts in the
docker-solr repo, so it needs to be readily available and it needs to be easy
to see what has changed.

I think one could theoretically move this into the main Solr repo and point to
its GitHub address, but that would make things slower and much harder to
review, so I think it's much better to keep the separate repo. I briefly looked
for official guidance on this but couldn't find it spelled out explicitly; I
did see https://github.com/docker-library/official-images#maintainership, which
talks about maintaining git history. Note also that I already use a
"docker-solr" GitHub org for the repo, rather than my own account, to make it
easier to vary ownership. If you are dead-set on putting it into the main repo,
I'd run that discussion past the library team first, before sinking engineering
time into it.

> 2. How will the current build/test/publish process need to change?
>    * Can we continue using travis for CI?

In the short term, sure.
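As an aside, entries in that library/solr file follow the official-images
manifest format: a shared header naming the maintainers and the Git repo,
followed by one stanza per release that pins a commit and a directory. A sketch
of what such an entry looks like (the maintainer line, tags, and commit hash
below are placeholders, not the real file contents):

```
# illustrative only; see library/solr in docker-library/official-images
# for the real contents
Maintainers: Some Maintainer <maintainer@example.com> (@example)
GitRepo: https://github.com/docker-solr/docker-solr.git

Tags: 8.4.1, 8.4, 8, latest
GitCommit: 0123456789abcdef0123456789abcdef01234567
Directory: 8.4
```

Their build infrastructure checks out the repo at the pinned commit and builds
the Dockerfile in the named directory for each listed tag, which is why their
review hinges on being able to see exactly what changed in the referenced
commit.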
Travis has been great for us: it is free, it builds fast enough, the UI is
nice, the config is simple, the integration is good, and support has been
helpful. Last year Travis CI was acquired, followed by layoffs of senior
engineering staff, so there are concerns about its future, but nothing has
actually changed in a way that affects us. I imagine it would be nicer to have
this in the normal Apache Jenkins world, but I'm not volunteering for that
migration :-) If we want to stay on Travis, some configuration changes may be
required (roles/permissions/credentials and such that are tied to my account).

Just to make it clear, the CI does two things:

- it sets build status on GitHub commits (although there is currently no
  enforcement that only passing PRs can be merged, nor review/automerge
  workflows, which would be nice to have), and
- it pushes builds to the
  https://hub.docker.com/repository/docker/dockersolr/docker-solr repo; those
  are only used for testing, and are not the images that back the official
  images.

I've found that occasionally useful, but we could decide not to do that, or do
it differently within the Apache infrastructure.

> * Do we need to talk to Docker folks to change repo location?

If we keep the same repo, then no :-) If the repo were to change, we'd update
that library/solr file and send a PR.

> * Should publishing of new Docker be a RM responsibility, or something
>   that happens right after each release like the ref-guide?

I don't have a strong opinion. I typically tried to do it as soon as I became
aware of a new version via the solr-user mailing list or Twitter. Sometimes it
was the same day; sometimes it would take a week because of changes I needed to
make or extra things I wanted to do. But if I was more than a few days late,
someone would be asking about it :-) The official library team's review is
usually very fast, same day or within 24h.

> 3. Legal stuff - when we as a project file a PR to update the official solr
>    docker images, are we then legally releasing a binary version of Solr?
>    Technically it is Docker CI that build and publish the images, we just
>    initiate it…

I don't know about that (or how that matters?)

> Do we know any other ASF project that maintain their own official docker
> image?

I've looked at
https://github.com/docker-library/official-images/tree/master/library and
spotted https://github.com/carlossg/docker-maven, which is maintained by an
Apache committer.

> 4. Practical things - change README, NOTICE, header files, wording etc

There is also https://github.com/docker-library/docs/tree/master/solr; the
individual markdown files there are used to generate
https://hub.docker.com/_/solr

> I have opened https://issues.apache.org/jira/browse/SOLR-14168
> as an umbrella issue for tasks that spin out from this email thread
> discussion.

Marcus wrote:

> I think that regardless of what the community decides to do with the
> docker-solr repo, a good first step would be to add a Docker folder to the
> Apache repository that contains a base Dockerfile and a README. In that
> README, users can be directed to the location of the docker-solr repo,
> wherever that may be, or leverage the Dockerfile in the Apache repo as a
> starting point for building their own image.

I think that could be useful, but it starts to become messy almost immediately:
users will expect these self-built images and the official images to work the
same, and given that docker-solr has various extra scripts (e.g. to create
collections at startup), you'd then have to copy those into the Apache repo
(and now you have duplicate maintenance and need to test them).
Or you could explicitly decide not to do that, but then your users will be
asking how to achieve the same functionality with their images. I would address
this as a separate issue; let's get the existing image flow taken care of
first.

— Martijn
