On 03/04/2018 08:37 PM, Tom Lane wrote:
> Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
>> FWIW I originally constructed the tests merely to verify that the fix
>> actually fixes the issue, but I think we should add some tests to make
>> sure it does not get broken in the future. It took quite a bit of time
>> until someone even hit the issue, so a breakage may easily go unnoticed
>> for a long time.
>
> Oh, well, that was another problem I had with it: those tests do basically
> nothing to ensure that we won't add another such problem in the future.
> If somebody added support for a new type FOO, and forgot to ensure that
> FOO-vs-not-FOO cases behaved sanely, these tests wouldn't catch it.
> (At least, not unless that somebody added a FOO-vs-not-FOO case to these
> tests, but it's hardly likely they'd do that if they hadn't considered
> the possibility.)
I don't follow. How would adding new custom types break the checks? If
someone adds a new type, along with operators comparing it to the
built-in types supported by convert_to_scalar, then surely it would hit
one of the code paths exercised by those tests.

> I think actually having put in the coding patterns for what to do with
> unsupported cases is our best defense against similar oversights in
> future: probably people will copy those coding patterns. Maybe the bytea
> precedent proves that some people will fail to think about it no matter
> what, but I don't know what more we can do.

Maybe. It's true the tests serve more as a safeguard against someone
messing with convert_to_scalar (e.g. while adding support for new types)
and undoing the fix for the current data types without realizing it. They
wouldn't catch issues in the handling of additional data types, of course
(like the bytea case).

So perhaps the best thing we can do is to document this in the comment
before convert_to_scalar? (For concreteness, a simplified sketch of the
coding pattern in question follows below, after my signature.)

kind regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
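To make the above concrete, here is a heavily simplified sketch of the
"return false for anything you don't understand" pattern. This is
illustrative only: the function name, signature, type set, and structure
are all reduced from what the real convert_to_scalar in
src/backend/utils/adt/selfuncs.c does.

#include <stdbool.h>

typedef unsigned int Oid;

/* a few built-in type OIDs, hard-coded here to keep the sketch standalone */
#define INT4OID    23
#define FLOAT8OID  701
#define TEXTOID    25

/*
 * Sketch of the convert_to_scalar coding pattern: convert the value and
 * the histogram bound to doubles only when we recognize both types and
 * they belong to the same category; otherwise return false, and let the
 * caller fall back on a default selectivity estimate.
 */
static bool
convert_to_scalar_sketch(Oid valuetypid, double rawvalue,
                         Oid boundstypid, double *scaledvalue)
{
    switch (valuetypid)
    {
        case INT4OID:
        case FLOAT8OID:
            /*
             * Numeric category: only convert if the bound is numeric
             * too.  A cross-category pair (say, an int4 value vs. a
             * text bound) falls through to the "unsupported" exit
             * below.
             */
            if (boundstypid == INT4OID || boundstypid == FLOAT8OID)
            {
                *scaledvalue = rawvalue;
                return true;
            }
            break;

        case TEXTOID:
            /* string-to-scalar conversion elided; the same guard applies */
            break;
    }

    /*
     * Unsupported type, or a value/bound combination from different
     * categories: refuse to convert rather than compare garbage.
     */
    return false;
}

The point is the unconditional "return false" at the bottom: anyone
adding support for a new type FOO only has to extend the switch, and a
forgotten FOO-vs-not-FOO combination degrades to a default estimate
instead of producing nonsense or an error.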