I'll repeat:

> GCC internally limits array sizes (in terms of bytes, not elements) to
> the maximum representable in a signed integer of the type specified by
> the back end as a SIZE_TYPE (even if that type is specified as an
> unsigned type, as it is in msp430).
>

It has done this since 1994, when it was intentionally changed to use a
signed type.  Though I could speculate too, I don't know why; the commit
comment didn't say.

The remainder of your comments on this specific topic are, IMO, subtly
incorrect (the argument based on "index" vice "offset"), subjective (that
support for a negative index is due to carelessness or stupidity), or
ignorant (that 20-bit support in gcc is specifically hard due to
expectations of 16/32-bit data size).  Let's leave it where the facts stop:
gcc won't let you declare the array that large.
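
For concreteness, this is roughly where the line falls on a target whose
SIZE_TYPE is 16 bits wide, as it is in mspgcc (array names are invented,
and the exact diagnostic wording will vary with the gcc version):

    char ok_chars[32767];      /* 32767 bytes             -> fits the signed 16-bit limit, accepted */
    char too_big[32768];       /* 32768 bytes             -> rejected ("size of array ... is too large") */

    int  ok_ints[16383];       /* 16383 * 2 = 32766 bytes -> accepted */
    int  too_big_ints[16384];  /* 16384 * 2 = 32768 bytes -> rejected for the same reason */

The limit is on the object's size in bytes, which is why the element count
that trips it is halved for int.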

Peter

On Wed, Apr 20, 2011 at 6:17 AM, JMGross <msp...@grossibaer.de> wrote:
> ----- Original Message -----
> From: Matthias Ringwald
> Sent: 20 Apr 2011 09:30:49
>
>> I'll check later whether the array bound with my > 32 KB array comes
>> from the address offset (as suggested by Peter) or from the array index
>> (as explained by JM) :)
>
> Both.
> For a char array, the index cannot be larger than 32766, because there
> cannot be more than 32767 elements. The compiler complains at 32768
> elements even though every element could still be reached with a 16-bit
> signed address offset. So this is an array index/element count
> limitation.
> However, if your array has int-sized elements, the limit is 16383
> elements, i.e. a maximum index of 16382, far below the index maximum;
> here it is the byte offset that grows too large for a signed 16-bit
> offset. Same cause: signed instead of unsigned types are used.
>
> I really wonder why this limitation exists at all. There is no point in
> a negative index, unless the programmer is desperately juggling
> pointers. I guess that's another case where a (rather stupid, or at
> least careless) implementation has grown into a standard that is kept
> over the years for nothing but the sake of keeping the 'standard'.
> But I may be wrong and have missed an important point.
>
> 20-bit pointer support in the compiler could fix that, but it is
> difficult to implement 20-bit pointers while keeping compatibility with
> standard C and its 16/32-bit data sizes.
>
> I wish TI had implemented real 32-bit registers (even if PC and SP were
> truncated, or their upper 12 bits ignored, for address usage).
> It would make things so much easier to support at the compiler level
> and to keep compliant with the C++ standard.
> (And hey, loading a 32-bit register with a 32-bit immediate value would
> require two instructions - one step closer to the usual RISC
> implementation :) )
>
> JMGross
>
>
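
P.S. On the "there is no point in a negative index" remark above: a
negative subscript is perfectly well-defined C once you index through a
pointer into the middle of an object, which is one reason the
index-versus-offset argument gets subtle. A contrived sketch (names
invented):

    /* A negative subscript on an interior pointer is legal C. */
    static int samples[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    int prev_of(int *cursor)   /* cursor points somewhere inside samples[] */
    {
        return cursor[-1];     /* same as *(cursor - 1); valid while cursor > &samples[0] */
    }

    /* prev_of(&samples[3]) returns 2 */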