On 03/31/2014 12:25 PM, Andres Freund wrote:
On 2014-03-31 10:31:00 +0300, Heikki Linnakangas wrote:
On 03/23/2014 06:07 PM, Greg Stark wrote:
On Sun, Mar 23, 2014 at 1:22 PM, Heikki Linnakangas <[email protected]> wrote:

The reason for the typedef is precisely that an enum is not guaranteed to
be one byte. Tom suggested getting rid of the typedef, but it's needed to
make sure it's stored as one byte.

I'll go add a comment to it, explaining why it's needed.

If we're not using the enum type perhaps it would be better to make it a
bunch of #defines? The main advantage of enum types is that the debugger
knows what values are legal and can decode them for you.

Yeah, I think you're right. Changed it to use #defines, renamed the typedef
to GinTernaryValue as that's more descriptive, and added a brief comment
mentioning that a GinTernaryValue is compatible with a boolean.
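The committed shape is roughly the following sketch (names taken from the thread; the exact comments in the real src/include/access/gin.h may differ). Keeping GIN_FALSE as 0 and GIN_TRUE as 1 is what makes a GinTernaryValue compatible with a boolean:

```c
/* Sketch of the #define-based ternary type discussed above. */
typedef char GinTernaryValue;   /* one byte, unlike a bare enum */

#define GIN_FALSE 0             /* coincides with (bool) false */
#define GIN_TRUE  1             /* coincides with (bool) true */
#define GIN_MAYBE 2             /* the third, "don't know" state */
```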

It doesn't matter much in this case, but in general I find the reasoning
here less than convincing. For one, debuggers have a much easier time
providing the symbolic name (e.g. p WHATEVER_ENUM_VAL) for enums than for
plain defines. Also, it's perfectly possible to cast to the enum in
switch()es, and benefit from compiler warnings. Using defines prohibits
that...

Yeah, I could've gone either way on this case. The enum wasn't actually used anywhere, only the typedef was, so the debugger wouldn't have been able to use the enum values anyway. The same goes for the switch warnings, unless you explicitly cast to the enum type. It seems pretty unlikely that you would remember to do the cast but then forget to handle one of the three values. If there was any chance that we would add more values to the enum in the future, a cast in a switch would be useful for catching places where we forgot to handle the new value, but we're not going to add more values to this enum.

- Heikki


--
Sent via pgsql-committers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers
