On 14/07/2019 15:53, Martok wrote:
> Am 14.07.2019 um 14:18 schrieb Jonas Maebe:
>>> Side note: if this was done 100% consistently (and it does make sense!),
>>> the PACKENUM directive would be completely useless. There is no point in
>>> being able to specify the storage size of an enum when it can be and is
>>> ignored at will.
>>
>> In other cases, the requested storage size is reserved.
> 
> Oh please, don't troll.

Please refrain from throwing around insults.

> I know that you know that "is reserved" and "is used"
> are not the same thing.

If you want to nitpick: the compiler performs 1/2/4-byte writes for enums
of those sizes, so the full reserved storage is in fact used/initialised.
Again, the only relevant aspect in this discussion is the set of valid
values. The reserved/used/accessed/written/... storage size is unrelated
to that, to the extent explained in my previous message.
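
To make the storage-size point concrete, here is a minimal sketch in the
style of the Delphi program further down (the type and value names are
mine): the $Z/$PACKENUM directive changes only the reserved storage, not
the named range.

{$z4}
type
  twideenum = (wa, wb, wc);    { 4 bytes reserved }
{$z1}
type
  tnarrowenum = (na, nb, nc);  { 1 byte reserved }
begin
  writeln(SizeOf(twideenum));     { 4 }
  writeln(SizeOf(tnarrowenum));   { 1 }
  { the named range is 0..2 in both cases, regardless of the storage }
  writeln(Ord(Low(twideenum)), '..', Ord(High(twideenum)));
  writeln(Ord(Low(tnarrowenum)), '..', Ord(High(tnarrowenum)));
end.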

>> The issue with a modifier or directive that would change the definition
>> of which values are valid for an enumeration with this modifier/within
>> this directive, is (besides the inconsistent low/high and range checking
>> behaviour from Delphi this would require,
> This is easy, enums already have separate min/max and a basedef. Make the
> basedef a type that can hold an orddef, modify calcsavesize and packedbitsize
> accordingly, done.
> Low/High will continue to provide the named range.

This inconsistency would mean that we would need at least two versions of
defutils.getrange() in the compiler: one for (some) range checks (using
the named range), and one for other reasoning (which must use the basedef
range). There may be other such places that need modifications.
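
To make that concrete, here is a purely hypothetical model in plain
Pascal (not actual compiler code; getnamedrange/getbasedefrange are
invented names) of the two different answers the same def would have to
give under the proposal:

type
  tmyenum = (ea, eb, ec, ed);  { hypothetically declared with a byte basedef }

procedure getnamedrange(var l, h: longint);
begin
  { the range Low/High and (some) range checks would keep using }
  l := ord(low(tmyenum));
  h := ord(high(tmyenum));
end;

procedure getbasedefrange(var l, h: longint);
begin
  { the range all other reasoning would have to use }
  l := low(byte);
  h := high(byte);
end;

var
  l, h: longint;
begin
  getnamedrange(l, h);
  writeln('named range:   ', l, '..', h);   { 0..3 }
  getbasedefrange(l, h);
  writeln('basedef range: ', l, '..', h);   { 0..255 }
end.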

> Arrays of such a type should not be allowed (as for enums with jumps right
> now) (* although they could be without surprises, Array[tmyenum] would
> simply be Array[Byte] *).

The whole point of a directive/modifier would be to get
Delphi-compatible enums, not to add a third variant.

> On Subranges, that's also simple and obvious.

I meant that Delphi's range checking for enums/subranges doesn't make
any sense due to the inconsistency between the "valid" and "declared" range.

E.g., take this Delphi program (tested with Kylix 3; YMMV with modern
Delphi versions, to which I don't have access):

{$z1}
type
  tenum = (ea, eb, ec, ed);
  tsubenum = eb..ed;
  tsubsubenum = ec..ed;
var
  enum, enum2: tenum;
  subenum: tsubenum;
  subsubenum: tsubsubenum;
  arr: array[tenum] of byte;
begin
{$r+}
  // no problem
  enum:=tenum(255);
  // no problem
  enum2:=enum;
  // segmentation fault
  arr[enum]:=2;
  // range error
  subenum:=enum;
  // no problem
  subenum:=tsubenum(255);
  // no problem
  enum:=subenum;
  // segmentation fault
  arr[subenum]:=2;
  // range error
  subsubenum:=subenum;
  // no problem
  subsubenum:=tsubsubenum(255);
  // no problem (!)
  subenum:=subsubenum;
  // no problem
  enum:=subsubenum;
  // segmentation fault
  arr[subsubenum]:=2;
end.

This is both 100% inconsistent and 100% logical.

It is logical because, according to the (Delphi and FPC) type system
rules, tsubsubenum <= tsubenum <= tenum, and hence every tsubsubenum is by
definition a valid tsubenum and a valid tenum, every tsubenum is a valid
tenum (and every valid t(sub)(sub)enum is a valid t(sub)(sub)enum). As a
result, you don't need any range checks when converting in that direction
(an array indexation is also a type conversion to the index type, hence
the segmentation fault rather than a range error). However, the reverse
does not necessarily hold, and hence there are range checks when
converting in the other direction.

On the other hand, you have this whole inconsistency where 255 is both a
valid and an invalid value. So when a range check happens to be
inserted, you get a range check error, and when it isn't, you don't.

The only way to implement consistent range checking in this scenario is
to insert range checks for every single enumeration type conversion, even
between values of the same type. But then you get the curious situation
where 255 is valid and at the same time triggers range check errors all
over the place as soon as you dare to use it. Plus you get a whole bunch
of unnecessary range check operations in the code unless you add a
separate optimisation pass to remove them.
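
Just as an illustration (hand-written checks in the same {$z1}/{$r+}
setting as the Kylix program above; the helper name checkenum is
invented), "checking every conversion" would effectively amount to this:

{$z1}
type
  tenum = (ea, eb, ec, ed);

procedure checkenum(value: tenum);
begin
  { the check the compiler would have to emit for every conversion }
  if (ord(value) < ord(low(tenum))) or (ord(value) > ord(high(tenum))) then
    runerror(201);  { range check error }
end;

var
  enum, enum2: tenum;
begin
{$r+}
  enum := tenum(255);  { "valid" under such a proposal... }
  checkenum(enum);     { ...yet any consistent check has to reject it }
  enum2 := enum;       { and the same check would be needed here as well }
  checkenum(enum2);
end.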

I completely understand the practical use case for this kind of enum, but
from a type/correctness-checking point of view it's utter nonsense.

>> the post I linked near the beginning of
>> the thread:
>> https://forum.lazarus.freepascal.org/index.php/topic,45507.msg322059.html#msg322059
> 
> As I wrote in the last message, those points would be fully addressed by this
> proposal. Choosing at declaration time is the only valid solution, all modes
> will behave exactly the same once the type is defined.

As described in that post, the issue is that the default FPC units
declare various enumeration types without any specific modifiers. Even if
all default FPC units were changed to Delphi-compatible enums, you would
still get issues when mixing and matching units whose enumeration types
are declared differently, since there is no obvious way to know how a
particular enumeration type has been declared.
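
A minimal sketch of that situation (the unit, type and file names are
invented): nothing at the point of use reveals which enum semantics the
declaring unit was compiled with.

{ enumdecls.pas -- shipped by some library, compiled with whatever enum
  settings its author chose }
unit enumdecls;
interface
type
  tstate = (stIdle, stBusy, stDone);
implementation
end.

{ client.pas -- compiled elsewhere; whether this assignment is "valid"
  depends entirely on how tstate was declared, which the client cannot
  see from the type alone }
program client;
uses enumdecls;
var
  s: tstate;
begin
  s := tstate(255);
end.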


Jonas
