What are GET_MODE_BITSIZE and GET_MODE_PRECISION for PSImode?
It *should* be 20 and 20 for msp430. But GET_MODE_BITSIZE returns 32,
because it's a macro that does GET_MODE_SIZE * BITS_PER_UNIT, so it
cannot return 20.
No, it should not be 20; mode sizes are multiples of the unit.
PSImode is 20 bits, fits in a 20 bit register, and uses 20 bit operations.
Then why do you need this change?
- TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
+ TYPE_SIZE (type) = bitsize_int (GET_MODE_PRECISION (TYPE_MODE (type)));
TYPE_SIZE_UNIT (type) =
Because parts of the gcc code use the byte size instead of the bit
size, or round up, or assume powers-of-two sizes.
- TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
Except gcc now knows the size of partial int modes. In this case,
PSImode is 20 bits and TYPE_SIZE is 20 bits, so they match.
I don't understand. The problematic change is
- TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
+ TYPE_SIZE (type) = bitsize_int (GET_MODE_PRECISION (TYPE_MODE (type)));
which means that the precision of the mode is used to set the size
of the type, which very likely means that the size of the mode is
larger. So the size of the mode will be larger than the size of the
type, which is a lie.
For partial int modes, the precision and the actual size are the same.
I'm very skeptical... In any case, having a type whose TYPE_SIZE is smaller
than the size of its MODE is a lie which will bite you back at some point.
Setting TYPE_PRECISION is mostly useless, because most of gcc assumes
it's the same as TYPE_SIZE and ignores it.
Then you need to change that and not TYPE_SIZE.
It seems to work just fine in testing, and I'm trying to make it
non-fundamental.
I'm very skeptical... In any case, having a type whose TYPE_SIZE is smaller
than the size of its MODE is a lie which will bite you back at some point.
I don't doubt it, because I've been fighting these assumptions for years.
The fight needs to be resumed/sped up, that's clearly the right thing to do.
Then please provide a very good idea for how to teach gcc about true
20-bit types in a system with 8-bit memory and 16-bit words.
Yes. That's exactly the problem I'm trying to solve here. I'm making
partial int modes have real corresponding types, and they can be any
bit size, with target PS*modes to match. The MSP430, for example, has
20-bit modes, 20-bit operands, and __int20. Rounding up to byte sizes
forces the type to claim bits the hardware operations never use.
And the hardware really loads 20 bits and not 24 bits? If so, I
think you might want to consider changing the unit to 4 bits instead
of 8 bits. If not, the mode is padded and has 24-bit size, so why is
setting TYPE_PRECISION to 20 not sufficient to achieve what you
want?
On 07/03/2014 06:12 PM, DJ Delorie wrote:
The hardware transfers data in and out of byte-oriented memory in
TYPE_SIZE_UNIT-sized chunks. Once in a hardware register, all operations
are either 8, 16, or 20 bits (TYPE_SIZE) in size. So yes, values are
padded in memory, but no, they are not padded in registers.
That's what'll need fixing then.
Can I change TYPE_SIZE to TYPE_SIZE_WITH_PADDING then? Because it's
not reflecting the type's size any more. Why do we have to round up a
type's size anyway? That's a pointless assumption *unless* you're
allocating memory space for it, and in that case you should be using
TYPE_SIZE_UNIT anyway.
The whole point of using _PRECISION is to have the size be exactly the
same as the mode (the bitsize is bigger than the mode's real width for
partial-int modes). TYPE_SIZE_UNIT should be its storage size, right?
If the type is not a multiple of BITS_PER_UNIT, the actual size and
stored-in-memory size are different.
Do you have modes whose size is not multiple of the unit?
If you find a particular use of TYPE_SIZE is using a size that isn't
correct for your type whose precision is not a multiple of
BITS_PER_UNIT, then in my model the correct fix is to change that
use of TYPE_SIZE rather than to change the value of TYPE_SIZE for
that type - and such a change
gcc/
* cppbuiltin.c (define_builtin_macros_for_type_sizes): Round
pointer size up to a power of two.
* defaults.h (DWARF2_ADDR_SIZE): Round up.
(POINTER_SIZE_UNITS): New, rounded up value.
* dwarf2asm.c (size_of_encoded_value): Use it.
No stor-layout.c listed here but...
I knew I'd miss at least one in the split-up...
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c (revision 211858)
+++ gcc/stor-layout.c (working copy)
@@ -2123,13
On Fri, 27 Jun 2014, DJ Delorie wrote:
If you still disagree, let's first figure out what the right
relationship between TYPE_SIZE and TYPE_SIZE_UNIT is, for types that
aren't a multiple of BITS_PER_UNIT.
My suggestion: TYPE_SIZE should always be TYPE_SIZE_UNIT times
BITS_PER_UNIT, so it would be redundant.
Are you proposing we remove TYPE_SIZE completely?
On Fri, 27 Jun 2014, DJ Delorie wrote:
Are you proposing we remove TYPE_SIZE completely?
Yes; I think that makes sense, unless someone produces a clearer
definition of what TYPE_SIZE means that isn't redundant.
Yes; I think that makes sense, unless someone produces a clearer
definition of what TYPE_SIZE means that isn't redundant.
Does TYPE_SIZE have a different meaning than TYPE_PRECISION
for non-integer types? Floats, vectors, complex?
Part 1 of 4, split from the full patch. The purpose of this set of
changes is to remove assumptions in GCC about type sizes. Prior to
this patch, GCC assumed that all types were a power of two in size,
and used naive math accordingly.
Old:
POINTER_SIZE / BITS_PER_UNIT