On 8/15/23 05:37, MegaIng via Gcc wrote:
One of the big challenges I am facing is that for our backend we sometimes support 16 bits as the maximum size the CPU supports, including for pointers, and sometimes 32 or 64 bits. Am I seeing it correctly that POINTER_SIZE has to be a compile-time constant and can therefore not easily be changed by the backend during compilation based on command-line arguments?
We've certainly got targets which change POINTER_SIZE based on flags. It just needs to be a C expression. So you might see something like

#define POINTER_SIZE (TARGET_SOMEFLAG ? 16 : 32)
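
If the flag itself is new, it is usually wired up through the port's .opt file. A minimal sketch, assuming a hypothetical -m16 option (the option name and mask are made up for illustration; the .opt syntax is documented in the GCC internals manual):

m16
Target Mask(16BIT)
Generate code with 16-bit pointers.

The Mask(16BIT) record gives the backend a TARGET_16BIT macro, so the example above could equally be written as

#define POINTER_SIZE (TARGET_16BIT ? 16 : 32)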



Also, on another backend I saw comments relating to libgcc (or newlib?) not working that well on systems where int is 16 bits. Is that still true, and what is the best workaround?
GCC has supported 16-bit targets for eons, and there are still some in the tree. libgcc and newlib also support 16-bit targets.

It can be difficult to support something like double-precision floating point on a 16-bit target, so some 16-bit ports only support single-precision floating point.
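
A common way a port expresses that is through the type-size target macros in its target header. A minimal sketch of what such a 16-bit port might do (the 32-bit double is a deliberate simplification, not a requirement):

/* Make double the same width as float so only single-precision
   software floating point routines are needed.  */
#define FLOAT_TYPE_SIZE 32
#define DOUBLE_TYPE_SIZE 32
#define LONG_DOUBLE_TYPE_SIZE 32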


And a bit more concretely, something I am having a hard time debugging: I am getting `invalid_void` errors, seemingly triggered by the absence of `gen_return`, when compiling with anything higher than -O0. How do I correctly provide an implementation for that? Or disable its usage? Our epilogue is non-trivial, and it doesn't look like the other backends use something like `(define_insn "return" ...)`.

Many ports define trivial returns. But it's much more common to need a prologue and an epilogue. Those are typically define_expands which generate all the code to set up a stack frame on entry and to tear the stack frame down and return on exit.
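
As a rough sketch (the expander functions named here are hypothetical and would live in the port's .cc file), the machine description entries usually look something like:

(define_expand "prologue"
  [(const_int 0)]
  ""
{
  /* Emit the RTL that allocates the frame and saves registers.  */
  mytarget_expand_prologue ();
  DONE;
})

(define_expand "epilogue"
  [(const_int 0)]
  ""
{
  /* Emit the RTL that restores registers, tears the frame down
     and emits the return.  */
  mytarget_expand_epilogue ();
  DONE;
})

The epilogue expander emits the return itself, so a separate "return" pattern is only an optimization for the case where no epilogue code is needed at all.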


Jeff
