On Mon, Mar 4, 2013 at 3:19 PM, Laszlo Ersek <[email protected]> wrote:
> On 03/04/13 23:15, Jordan Justen wrote:
>> On Mon, Mar 4, 2013 at 11:46 AM, Laszlo Ersek <[email protected]> wrote:
>
>>> Also (independently), 0x3c00 is an "int" (INT32), and bitwise complement
>>> on signed integers is not the most beautiful thing in the universe. What
>>> happens now is:
>>>
>>> - ~0x3c00 has type "signed int",
>>> - representation is bit pattern 11111111 11111111 11000011 11111111,
>>> - value is -15361 decimal (two's complement on our platform --
>>>   implementation-defined),
>>> - when casting it to "short unsigned" (UINT16), we get
>>>   USHRT_MAX + 1 + (-15361) == 65536 - 15361 == 0xC3FF
>>>
>>> I think ((UINT16) ~0x3c00u) would be cleaner (without the detour into
>>> negative), but that's just an idea.
>>
>> Hmm, I was wondering what ~(BIT13|BIT12|BIT11|BIT10) might do, but
>> then I saw that BIT31 and lower in Base.h don't use the 'u' suffix.
>
> BIT31 (0x80000000) has type "unsigned int" already, because it is a
> hexadecimal constant with no suffix, and the value doesn't fit in "int"
> (C99 6.4.4.1p5). But BIT30 and below are "int"s.
>
>> Do you think that we should update Base.h?
>
> I wouldn't recommend that. Consider an expression (anywhere in the edk2
> tree) that looks like
>
>   expr1 op BITn
>
> where
> - expr1 has type "int", or is promoted to "int" (C99 6.3.1.1p2),
> - op is an operator that invokes the usual arithmetic conversions
>   (C99 6.3.1.8),
> - n <= 30.
>
> In this case the *result* type of the above expression is "int" now. If
> we turn BITn (n<=30) into "unsigned", then expr1, after any promotion to
> "int", will be converted to "unsigned int", as part of the usual
> arithmetic conversions. Consequently, unless the given operator
> specifies otherwise, the result type of the entire expression above
> becomes "unsigned int".
>
> The conversion of "expr1" itself could be worrisome, but I'm even more
> concerned about the changed result type of the entire expression. For
> example, in
>
>   if ((signed_int_1 & BIT5) >= signed_int_2) {
>     // ...
>   } else {
>     // ...
>   }
>
> both signed int's would be converted to "unsigned" after the change.
> Suppose that signed_int_1 is OK (eg. always in [0..4095]), but what if
> the programmer piggy-backed the "signed_int_2 == -1" case as "always
> true" (eg. some option is disabled and the "else" branch doesn't make
> sense). The conversion would invert this logic (the RHS would evaluate
> to UINT_MAX).
>
> I think the current macros are suboptimal, but without reviewing each
> use I wouldn't dare touch them. (And of course there are the proprietary
> trees we can't even grep.)
But, for the same reason you are concerned with:

  (UINT16) ~0x3c00

you'd be concerned with:

  (UINT16) ~(BIT13|BIT12|BIT11|BIT10)

But, it seems to me that this is exactly the use case for the BIT* macros. So, are they broken?

-Jordan

_______________________________________________
edk2-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/edk2-devel
