Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/05/2018 08:34 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> On 03/04/2018 09:46 PM, Tom Lane wrote:
>>> Well, I think the existing bytea bug is a counterexample to that. If someone were to repeat that mistake with, say, UUID, these tests would not catch it, because none of them would exercise UUID-vs-something-else. For that matter, your statement is false on its face, because even if somebody tried to add say uuid-versus-int8, these tests would not catch lack of support for that in convert_to_scalar unless the specific case of uuid-versus-int8 were added to the tests.
>
>> I suspect we're simply having different expectations of what the tests could/should protect against - my intention was to make sure someone does not break convert_to_scalar for the currently handled types.
>
> I concur that we could use better test coverage for the existing code --- the coverage report is pretty bleak there. But I think we could improve that just by testing with the existing operators. I do not see much use in checking that unsupported cross-type cases fail cleanly, because there isn't a practical way to have full coverage for such a concern.

Hmmm, OK. I'm sure we could improve the coverage of the file in general by using existing operators, no argument here. Obviously, that does not work for the failure cases in convert_to_scalar(), because no existing operator can exercise those. I wouldn't call those cases 'unsupported' - they are pretty obviously supported, just meant to handle unknown user-defined data types.

Admittedly, the operators in the tests were rather silly, but I found them practical enough for this purpose.

In any case, thanks for the discussion! I now understand the reasoning why you did not commit the tests, at least.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> On 03/04/2018 09:46 PM, Tom Lane wrote:
>> Well, I think the existing bytea bug is a counterexample to that. If someone were to repeat that mistake with, say, UUID, these tests would not catch it, because none of them would exercise UUID-vs-something-else. For that matter, your statement is false on its face, because even if somebody tried to add say uuid-versus-int8, these tests would not catch lack of support for that in convert_to_scalar unless the specific case of uuid-versus-int8 were added to the tests.

> I suspect we're simply having different expectations of what the tests could/should protect against - my intention was to make sure someone does not break convert_to_scalar for the currently handled types.

I concur that we could use better test coverage for the existing code --- the coverage report is pretty bleak there. But I think we could improve that just by testing with the existing operators. I do not see much use in checking that unsupported cross-type cases fail cleanly, because there isn't a practical way to have full coverage for such a concern.

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/04/2018 09:46 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> On 03/04/2018 08:37 PM, Tom Lane wrote:
>>> Oh, well, that was another problem I had with it: those tests do basically nothing to ensure that we won't add another such problem in the future.
>
>> I don't follow. How would adding new custom types break the checks? If someone adds a new type along with operators for comparing it with the built-in types (supported by convert_to_scalar), then surely it would hit a code path tested by those tests.
>
> Well, I think the existing bytea bug is a counterexample to that. If someone were to repeat that mistake with, say, UUID, these tests would not catch it, because none of them would exercise UUID-vs-something-else. For that matter, your statement is false on its face, because even if somebody tried to add say uuid-versus-int8, these tests would not catch lack of support for that in convert_to_scalar unless the specific case of uuid-versus-int8 were added to the tests.

I suspect we're simply having different expectations of what the tests could/should protect against - my intention was to make sure someone does not break convert_to_scalar for the currently handled types.

So for example if someone adds the uuid-versus-int8 operator you mention, then int8_var > '55e65ca2-4136-4a4b-ba78-cd3fe4678203'::uuid simply returns false from convert_to_scalar because it does not handle UUIDs yet (and you're right that none of the tests enforces it), while uuid_var > 123456::int8 is handled by the NUMERICOID case, and convert_numeric_to_scalar returns failure=true. And this is already checked by one of the tests.

If someone adds UUID handling into convert_to_scalar, that would need to include a bunch of new tests, of course.

One reason why I wanted to include the tests was that we've been talking about reworking this code to allow custom conversion routines etc. In that case, being able to verify the behavior stays the same is quite valuable, I think.

>> So perhaps the best thing we can do is documenting this in the comment before convert_to_scalar?
>
> I already updated the comment inside it ...

Oh, right. Sorry, I missed that somehow.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> On 03/04/2018 08:37 PM, Tom Lane wrote:
>> Oh, well, that was another problem I had with it: those tests do basically nothing to ensure that we won't add another such problem in the future.

> I don't follow. How would adding new custom types break the checks? If someone adds a new type along with operators for comparing it with the built-in types (supported by convert_to_scalar), then surely it would hit a code path tested by those tests.

Well, I think the existing bytea bug is a counterexample to that. If someone were to repeat that mistake with, say, UUID, these tests would not catch it, because none of them would exercise UUID-vs-something-else. For that matter, your statement is false on its face, because even if somebody tried to add say uuid-versus-int8, these tests would not catch lack of support for that in convert_to_scalar unless the specific case of uuid-versus-int8 were added to the tests.

> So perhaps the best thing we can do is documenting this in the comment before convert_to_scalar?

I already updated the comment inside it ...

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/04/2018 08:37 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> FWIW I originally constructed the tests merely to verify that the fix actually fixes the issue, but I think we should add some tests to make sure it does not get broken in the future. It took quite a bit of time until someone even hit the issue, so a breakage may easily go unnoticed for a long time.
>
> Oh, well, that was another problem I had with it: those tests do basically nothing to ensure that we won't add another such problem in the future. If somebody added support for a new type FOO, and forgot to ensure that FOO-vs-not-FOO cases behaved sanely, these tests wouldn't catch it. (At least, not unless the somebody added a FOO-vs-not-FOO case to these tests, but it's hardly likely they'd do that if they hadn't considered the possibility.)

I don't follow. How would adding new custom types break the checks? If someone adds a new type along with operators for comparing it with the built-in types (supported by convert_to_scalar), then surely it would hit a code path tested by those tests.

> I think actually having put in the coding patterns for what to do with unsupported cases is our best defense against similar oversights in future: probably people will copy those coding patterns. Maybe the bytea precedent proves that some people will fail to think about it no matter what, but I don't know what more we can do.

Maybe. It's true the tests served more as a safety net against someone messing with convert_to_scalar (e.g. by adding support for new types) and undoing the fix for the current data types in some way without realizing it. They wouldn't catch issues in the handling of additional data types, of course (like the bytea case).

So perhaps the best thing we can do is documenting this in the comment before convert_to_scalar?

kind regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> FWIW I originally constructed the tests merely to verify that the fix actually fixes the issue, but I think we should add some tests to make sure it does not get broken in the future. It took quite a bit of time until someone even hit the issue, so a breakage may easily go unnoticed for a long time.

Oh, well, that was another problem I had with it: those tests do basically nothing to ensure that we won't add another such problem in the future. If somebody added support for a new type FOO, and forgot to ensure that FOO-vs-not-FOO cases behaved sanely, these tests wouldn't catch it. (At least, not unless the somebody added a FOO-vs-not-FOO case to these tests, but it's hardly likely they'd do that if they hadn't considered the possibility.)

I think actually having put in the coding patterns for what to do with unsupported cases is our best defense against similar oversights in future: probably people will copy those coding patterns. Maybe the bytea precedent proves that some people will fail to think about it no matter what, but I don't know what more we can do.

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/04/2018 05:59 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> On 03/04/2018 02:37 AM, Tom Lane wrote:
>>> I was kind of underwhelmed with these test cases, too, so I didn't commit them. But they were good for proving that the bytea bug wasn't hypothetical :-)
>
>> Underwhelmed in what sense? Should the tests be constructed in some other way, or do you think it's something that doesn't need the tests?
>
> The tests seemed pretty ugly, and I don't think they were doing much to improve test coverage by adding all those bogus operators. Now, a look at https://coverage.postgresql.org/src/backend/utils/adt/selfuncs.c.gcov.html says that our test coverage for convert_to_scalar stinks, but we could (and probably should) improve that just by testing extant operators.

Hmmm, OK. I admit the tests weren't a work of art, but I didn't consider them particularly ugly either. But that's a matter of taste, I guess.

Using existing operators was the first thing I considered, of course, but to exercise this part of the code you need an operator that:

1) uses ineq_histogram_selectivity (so either scalarineqsel or prefix_selectivity)

2) has oprright != oprleft

3) has either oprright or oprleft unsupported by convert_to_scalar

I don't think we have such an operator (built-in or in the regression tests). At least I haven't found one, and this was the simplest case that I've been able to come up with. But maybe you had another idea.

> A concrete argument for not creating those operators is that they pose a risk of breaking concurrently-running tests by capturing inexact argument matches (cf CVE-2018-1058). There are ways to get around that, eg run the whole test inside a transaction we never commit; but I don't really think we need the complication.

I don't think the risk of breaking other tests is very high - the operators were sufficiently "bogus" to make that unlikely, I think. But it's simple to mitigate it by either running the test in a transaction, dropping the operators at the end of the test, or just using some sufficiently unique operator name (and not '>').

FWIW I originally constructed the tests merely to verify that the fix actually fixes the issue, but I think we should add some tests to make sure it does not get broken in the future. It took quite a bit of time until someone even hit the issue, so a breakage may easily go unnoticed for a long time.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> On 03/04/2018 02:37 AM, Tom Lane wrote:
>> I was kind of underwhelmed with these test cases, too, so I didn't commit them. But they were good for proving that the bytea bug wasn't hypothetical :-)

> Underwhelmed in what sense? Should the tests be constructed in some other way, or do you think it's something that doesn't need the tests?

The tests seemed pretty ugly, and I don't think they were doing much to improve test coverage by adding all those bogus operators. Now, a look at https://coverage.postgresql.org/src/backend/utils/adt/selfuncs.c.gcov.html says that our test coverage for convert_to_scalar stinks, but we could (and probably should) improve that just by testing extant operators.

A concrete argument for not creating those operators is that they pose a risk of breaking concurrently-running tests by capturing inexact argument matches (cf CVE-2018-1058). There are ways to get around that, eg run the whole test inside a transaction we never commit; but I don't really think we need the complication.

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/04/2018 02:37 AM, Tom Lane wrote:
> Tomas Vondra writes:
>> Here is v2 of the fix. It does handle all the convert_to_scalar calls for various data types, just like the numeric one did in v1, with the exception of bytea.
>
> Pushed with some adjustments.

Thanks. I see I forgot to tweak a call in btree_gist - sorry about that.

>> The bytea case is fixed by checking that the boundary values are varlenas. This seems better than checking for BYTEAOID explicitly, which would fail for custom varlena-based types. At first I'd been thinking there might be issues when the data types have mismatching ordering, but I don't think the patch makes it any worse.
>
> I didn't like this one bit. It's unlike all the other cases, which accept only specific type OIDs, and there's no good reason to assume that a bytea-vs-something-else comparison operator would have bytea-like semantics. So I think it's better to punt, pending the invention of an API to let the operator supply its own convert_to_scalar logic.

OK, understood. That's the "mismatching ordering" I was wondering about.

>> I've also added a bunch of regression tests, checking each case. The bytea test should cause a segfault on master, of course.
>
> I was kind of underwhelmed with these test cases, too, so I didn't commit them. But they were good for proving that the bytea bug wasn't hypothetical :-)

Underwhelmed in what sense? Should the tests be constructed in some other way, or do you think it's something that doesn't need the tests?

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> Here is v2 of the fix. It does handle all the convert_to_scalar calls for various data types, just like the numeric one did in v1, with the exception of bytea.

Pushed with some adjustments.

> The bytea case is fixed by checking that the boundary values are varlenas. This seems better than checking for BYTEAOID explicitly, which would fail for custom varlena-based types. At first I'd been thinking there might be issues when the data types have mismatching ordering, but I don't think the patch makes it any worse.

I didn't like this one bit. It's unlike all the other cases, which accept only specific type OIDs, and there's no good reason to assume that a bytea-vs-something-else comparison operator would have bytea-like semantics. So I think it's better to punt, pending the invention of an API to let the operator supply its own convert_to_scalar logic.

> I've also added a bunch of regression tests, checking each case. The bytea test should cause a segfault on master, of course.

I was kind of underwhelmed with these test cases, too, so I didn't commit them. But they were good for proving that the bytea bug wasn't hypothetical :-)

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Hi,

Here is v2 of the fix. It does handle all the convert_to_scalar calls for various data types, just like the numeric one did in v1, with the exception of bytea.

The bytea case is fixed by checking that the boundary values are varlenas. This seems better than checking for BYTEAOID explicitly, which would fail for custom varlena-based types. At first I'd been thinking there might be issues when the data types have mismatching ordering, but I don't think the patch makes it any worse.

I've also added a bunch of regression tests, checking each case. The bytea test should cause a segfault on master, of course.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/network.c b/src/backend/utils/adt/network.c
index aac7621..34cf336 100644
--- a/src/backend/utils/adt/network.c
+++ b/src/backend/utils/adt/network.c
@@ -904,7 +904,7 @@ inet_merge(PG_FUNCTION_ARGS)
  * involving network types.
  */
 double
-convert_network_to_scalar(Datum value, Oid typid)
+convert_network_to_scalar(Datum value, Oid typid, bool *unknown_type)
 {
 	switch (typid)
 	{
@@ -960,7 +960,7 @@ convert_network_to_scalar(Datum value, Oid typid)
 	 * Can't get here unless someone tries to use scalarineqsel() on an
 	 * operator with one network and one non-network operand.
 	 */
-	elog(ERROR, "unsupported type: %u", typid);
+	*unknown_type = true;
 	return 0;
 }
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index fcc8323..b210541 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -176,7 +176,7 @@ static bool estimate_multivariate_ndistinct(PlannerInfo *root,
 static bool convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 			   Datum lobound, Datum hibound, Oid boundstypid,
 			   double *scaledlobound, double *scaledhibound);
-static double convert_numeric_to_scalar(Datum value, Oid typid);
+static double convert_numeric_to_scalar(Datum value, Oid typid, bool *unknown_type);
 static void convert_string_to_scalar(char *value,
 					 double *scaledvalue,
 					 char *lobound,
@@ -193,8 +193,9 @@ static double convert_one_string_to_scalar(char *value,
 					   int rangelo, int rangehi);
 static double convert_one_bytea_to_scalar(unsigned char *value, int valuelen,
 						  int rangelo, int rangehi);
-static char *convert_string_datum(Datum value, Oid typid);
-static double convert_timevalue_to_scalar(Datum value, Oid typid);
+static char *convert_string_datum(Datum value, Oid typid, bool *unknown_type);
+static double convert_timevalue_to_scalar(Datum value, Oid typid,
+							bool *unknown_type);
 static void examine_simple_variable(PlannerInfo *root, Var *var,
 						VariableStatData *vardata);
 static bool get_variable_range(PlannerInfo *root, VariableStatData *vardata,
@@ -4071,10 +4072,18 @@ convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 		case REGDICTIONARYOID:
 		case REGROLEOID:
 		case REGNAMESPACEOID:
-			*scaledvalue = convert_numeric_to_scalar(value, valuetypid);
-			*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);
-			*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);
-			return true;
+			{
+				bool		unknown_type = false;
+
+				*scaledvalue = convert_numeric_to_scalar(value, valuetypid,
+														 &unknown_type);
+				*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid,
+														   &unknown_type);
+				*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid,
+														   &unknown_type);
+
+				return (!unknown_type);
+			}

 			/*
 			 * Built-in string types
@@ -4085,9 +4094,18 @@ convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 		case TEXTOID:
 		case NAMEOID:
 			{
-				char	   *valstr = convert_string_datum(value, valuetypid);
-				char	   *lostr = convert_string_datum(lobound, boundstypid);
-				char	   *histr = convert_string_datum(hibound, boundstypid);
+				bool		unknown_type = false;
+
+				char	   *valstr = convert_string_datum(value, valuetypid,
+														  &unknown_type);
+				char	   *lostr = convert_string_datum(lobound, boundstypid,
+														 &unknown_type);
+				char	   *histr = convert_string_datum(hibound, boundstypid,
+														 &unknown_type);
+
+				/* bail out if any of the values is not a string */
+				if (unknown_type)
+					return false;

 				convert_string_to_scalar(valstr, scaledvalue,
 										 lostr, scaledlobound,
@@ -4103,6 +4121,18 @@ convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 			 */
 		case BYTEAOID:
 			{
+				TypeCacheEntry *typcache;
+
+				/*
+				 * Ensure parameters passed to convert_bytea_to_scalar are
+				 * all varlena. We're dealing with bytea values, but this
+				 * seems like a reasonable way to handle custom types.
+				 */
+				typcache = lookup_type_cache(boundstypid, 0);
+
+				if (typcache->typlen != -1)
+					return false;
+
 				convert_bytea_to_scalar(value,
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
On 03/03/2018 01:56 AM, Tom Lane wrote:
> Tomas Vondra writes:
>> OK, time to revive this old thread ...
>>> [ scalarineqsel may fall over when used by extension operators ]
>
>> Attached is a minimal fix adding a flag to convert_numeric_to_scalar, tracking when it fails because of an unsupported data type. If any of the 3 calls (value + lo/hi boundaries) sets it to 'true' we simply fall back to the default estimate (0.5) within the bucket.
>
> I think this is a little *too* minimal, because it only covers convert_numeric_to_scalar and not the other sub-cases in which convert_to_scalar will throw an error instead of returning "false". I realize that that would be enough for your use-case, but I think we need to think more globally. So please fix the other ones too.

Will do.

> I notice BTW that whoever shoehorned in the bytea case failed to pay attention to the possibility that not all three inputs are of the same type; so that code is flat out broken, and capable of crashing if fed dissimilar types. We ought to deal with that while we're at it, since (IMO) the goal is to make convert_to_scalar fail-soft for any combination of supplied data types.

OK, I'll look into that too.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Tomas Vondra writes:
> OK, time to revive this old thread ...
>>> [ scalarineqsel may fall over when used by extension operators ]

> Attached is a minimal fix adding a flag to convert_numeric_to_scalar, tracking when it fails because of an unsupported data type. If any of the 3 calls (value + lo/hi boundaries) sets it to 'true' we simply fall back to the default estimate (0.5) within the bucket.

I think this is a little *too* minimal, because it only covers convert_numeric_to_scalar and not the other sub-cases in which convert_to_scalar will throw an error instead of returning "false". I realize that that would be enough for your use-case, but I think we need to think more globally. So please fix the other ones too.

I notice BTW that whoever shoehorned in the bytea case failed to pay attention to the possibility that not all three inputs are of the same type; so that code is flat out broken, and capable of crashing if fed dissimilar types. We ought to deal with that while we're at it, since (IMO) the goal is to make convert_to_scalar fail-soft for any combination of supplied data types.

            regards, tom lane
Re: [HACKERS] user-defined numeric data types triggering ERROR: unsupported type
OK, time to revive this old thread ...

On 09/23/2017 05:27 PM, Tom Lane wrote:
> Tomas Vondra writes:
>>> [ scalarineqsel may fall over when used by extension operators ]
>
>> What about using a two-pronged approach:
>
>> 1) fall back to mid bucket in back branches (9.3 - 10)
>> 2) do something smarter (along the lines you outlined) in PG11
>
> Sure. We need to test the fallback case anyway.

Attached is a minimal fix adding a flag to convert_numeric_to_scalar, tracking when it fails because of an unsupported data type. If any of the 3 calls (value + lo/hi boundaries) sets it to 'true' we simply fall back to the default estimate (0.5) within the bucket.

>>> [ sketch of a more extensible design ]
>
>> Sounds reasonable to me, I guess - I can't really think of anything simpler giving us the same flexibility.
>
> Actually, on further thought, that's still too simple. If you look at convert_string_to_scalar() you'll see it's examining all three values concurrently (the probe value, of one datatype, and two bin boundary values of possibly a different type). The reason is that it's stripping off whatever common prefix those strings have before trying to form a numeric equivalent. While certainly convert_string_to_scalar is pretty stupid in the face of non-ASCII sort orders, the prefix-stripping is something I really don't want to give up. So the design I sketched of considering each value totally independently isn't good enough.
>
> We could, perhaps, provide an API that lets an operator estimation function replace convert_to_scalar in toto, but that seems like you'd still end up duplicating code in many cases. Not sure about how to find a happy medium.

I plan to work on this improvement next, once I polish a couple of other patches for the upcoming commit fest.
regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index fcc8323..5f6019a 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -176,7 +176,7 @@ static bool estimate_multivariate_ndistinct(PlannerInfo *root,
 static bool convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 			   Datum lobound, Datum hibound, Oid boundstypid,
 			   double *scaledlobound, double *scaledhibound);
-static double convert_numeric_to_scalar(Datum value, Oid typid);
+static double convert_numeric_to_scalar(Datum value, Oid typid, bool *unknown_type);
 static void convert_string_to_scalar(char *value,
 					 double *scaledvalue,
 					 char *lobound,
@@ -4071,10 +4071,15 @@ convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
 		case REGDICTIONARYOID:
 		case REGROLEOID:
 		case REGNAMESPACEOID:
-			*scaledvalue = convert_numeric_to_scalar(value, valuetypid);
-			*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);
-			*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);
-			return true;
+			{
+				bool		unknown_type = false;
+
+				*scaledvalue = convert_numeric_to_scalar(value, valuetypid, &unknown_type);
+				*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid, &unknown_type);
+				*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid, &unknown_type);
+
+				return (!unknown_type);
+			}

@@ -4147,7 +4152,7 @@ convert_to_scalar(Datum value, Oid valuetypid, double *scaledvalue,
  * Do convert_to_scalar()'s work for any numeric data type.
  */
 static double
-convert_numeric_to_scalar(Datum value, Oid typid)
+convert_numeric_to_scalar(Datum value, Oid typid, bool *unknown_type)
 {
 	switch (typid)
 	{
@@ -4185,9 +4190,11 @@ convert_numeric_to_scalar(Datum value, Oid typid)

 	/*
 	 * Can't get here unless someone tries to use scalarineqsel() on an
-	 * operator with one numeric and one non-numeric operand.
+	 * operator with one numeric and one non-numeric operand. Or more
+	 * precisely, an operand that we don't know how to transform to a scalar.
 	 */
-	elog(ERROR, "unsupported type: %u", typid);
+	*unknown_type = true;
+	return 0;
 }