On Mon, Jul 8, 2019 at 9:57 AM Michael Paquier <mich...@paquier.xyz> wrote:
>
> On Fri, Jul 05, 2019 at 07:25:41PM +0200, Julien Rouhaud wrote:
> > On Fri, Jul 5, 2019 at 6:16 PM Peter Eisentraut
> > <peter.eisentr...@2ndquadrant.com> wrote:
> >> Isn't that also the case for your proposal?  We are not going to release
> >> a new reindexdb before a new REINDEX.
> >
> > Sure, but my point was that once the new reindexdb is released (or if
> > you're so desperate, using a nightly build or compiling your own), it
> > can be used against any previous major version.  There is probably a
> > large fraction of users who don't perform a postgres upgrade when they
> > upgrade their OS, so that's IMHO also something to consider.
>
> I think that we need to think long-term here and be confident in the
> fact we will still see breakages with collations and glibc, using a
> solution that we think is the right API.  Peter's idea to make the
> backend-aware command of the filtering is cool.  On top of that, there
> is no need to add any conflict logic in reindexdb and we can live with
> restricting --jobs support for non-index objects.
Don't get me wrong, I do agree that implementing the filtering in the
backend is a better design.  What's bothering me is that I also agree
that there will be more glibc breakage, and if that happens within a
few years, a lot of people will still be running pg12 or older
versions, and they still won't have an efficient way to rebuild their
indexes.

Now, it'd be easy to publish an external tool that does a simple
parallel, glibc-filtered reindex, which would serve that purpose for
the few years it'll be needed, so everyone can be happy.

For now, I'll resubmit the parallel patch using the per-table-only
approach, and will submit the backend filtering using a new REINDEX
option in a separate thread.
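Purely to illustrate what the filter-then-group core of such an external
tool could look like (the row shape, the provider codes, and the
`plan_reindex` helper are my own assumptions for the sketch, not an
existing tool or API; provider letters follow pg_collation.collprovider):

```python
from collections import defaultdict

# Hypothetical shape of one catalog row: (table_name, index_name, provider),
# where provider mimics pg_collation.collprovider: 'c' = libc, 'i' = ICU,
# 'd' = database default (assumed here to be a libc locale).
LIBC_PROVIDERS = {'c', 'd'}

def plan_reindex(rows):
    """Keep only glibc-affected indexes and emit one REINDEX TABLE
    statement per table, matching the per-table-only parallel approach:
    each statement can go to a separate connection without two workers
    fighting over locks on the same table."""
    by_table = defaultdict(list)
    for table, index, provider in rows:
        if provider in LIBC_PROVIDERS:
            by_table[table].append(index)
    return {t: 'REINDEX TABLE %s;' % t for t in sorted(by_table)}

rows = [
    ('public.t1', 't1_text_idx', 'c'),   # libc collation: affected
    ('public.t1', 't1_pkey',     'i'),   # ICU collation: skipped
    ('public.t2', 't2_name_idx', 'd'),   # default (libc) collation: affected
]
print(plan_reindex(rows))
```

A real tool would of course build `rows` from a query joining pg_index,
pg_depend and pg_collation, and would have to cope with pre-12 servers
where collprovider does not exist; the sketch only shows the
glibc-filtering and per-table grouping that the reindexdb patch cannot
offer to already-released branches.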