On Sat, Mar 24, 2012 at 16:13, Thomas Koenig <tkoe...@netcologne.de> wrote:
> Hello world,
> this patch uses division by known sizes (which can usually be replaced
> by a simple shift because intrinsics have sizes of power of two) instead
> of division by the size extracted from the array descriptor itself.
> This should save about 20 cycles for a single calculation.
> I'll go through the rest of the library to identify other possibilities
> for this.
> Regression-tested, no new failures.
> OK for the branch?
--- libgfortran.h (revision 185261)
+++ libgfortran.h (working copy)
@@ -364,6 +364,11 @@
#define GFC_DESCRIPTOR_TYPE(desc) (((desc)->dtype & GFC_DTYPE_TYPE_MASK) \
                                   >> GFC_DTYPE_TYPE_SHIFT)
#define GFC_DESCRIPTOR_SIZE(desc) ((desc)->dtype >> GFC_DTYPE_SIZE_SHIFT)
+/* This is for getting the size of a descriptor when the type of the
+ descriptor is known at compile-time. Do not use for string types. */
+#define GFC_DESCRIPTOR_SIZE_TYPEKNOWN(desc) (sizeof ((desc)->base_addr[0]))
#define GFC_DESCRIPTOR_DATA(desc) ((desc)->base_addr)
#define GFC_DESCRIPTOR_DTYPE(desc) ((desc)->dtype)
The comment should say "size of a type" rather than "size of a descriptor".
Though I think that longer term, maybe we should do the address
calculations in bytes directly? At the moment we have (on the branch)
the stride in bytes, which we convert to elements by dividing by the
element size, and then the compiler converts it back to bytes by
multiplying by the element size when it calculates the address of an
array element. So maybe instead we should keep the address of the
current element in a variable of type char*, and cast it to the
correct type only when operating on that element?