Why is it supposed to work? I do not follow.
#define INT_MIN 0x8000
This might be an unsigned integer constant according to the
standards: a hexadecimal constant takes the first of the types int,
unsigned int, long int, ... that can represent its value, so on a
platform whose int is 16 bits wide, 0x8000 (32768) ends up with the
type unsigned int. When such a constant is used, it can cause
implicit conversion to unsigned in the rest of the calculations and
comparisons, and all hell breaks loose. This is a reason the standard
headers define INT_MIN as (-2147483647 - 1) rather than as a
hexadecimal constant.
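To make the conversion visible, here is a small standalone sketch (my
own illustration, not code from the FreeType sources). On a typical
platform with 32-bit int, the constant 0x80000000 plays the same role
that 0x8000 plays when int is 16 bits wide:

#include <stdio.h>

int main(void)
{
    int x = -1;

    /* 0x80000000 does not fit into a 32-bit int, so the standard
     * gives this hexadecimal constant the type `unsigned int'.
     * The comparison below therefore converts `x' to unsigned,
     * turning -1 into 0xFFFFFFFF, and the test comes out false
     * even though -1 is obviously smaller. */
    if (x < 0x80000000)
        printf("signed comparison, as the programmer expected\n");
    else
        printf("unsigned comparison: -1 is NOT smaller!\n");

    /* <limits.h> avoids this by spelling INT_MIN as an expression
     * of type `int'. */
    printf("INT_MIN as an int expression: %d\n", -2147483647 - 1);

    return 0;
}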
Dear Brian,
Thanks! I will take a look, but please bear in mind that it might
take me a month to work on this issue, because I have to start by
setting up Visual Studio...
Regards,
mpsuzuki
Brian Sullender wrote:
Sure. Here is a dummy project I created for you to test and
reproduce the problem.
I'm compiling with VS C++ 11
When INT_MIN is defined as:
#define INT_MIN 0x8000
instead of:
#define INT_MIN (-2147483647 - 1)
FT_Render_Glyph returns Invalid_Pixel_Size
Not really a problem, but annoying. Thanks.
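For context, the call sequence that ends in FT_Render_Glyph might
look roughly like the sketch below; the font path, character, and
pixel size are placeholders rather than the values from my actual
project, and the intermediate error checks are omitted for brevity:

#include <ft2build.h>
#include FT_FREETYPE_H

int main(void)
{
    FT_Library  library;
    FT_Face     face;
    FT_Error    error;

    FT_Init_FreeType(&library);
    FT_New_Face(library, "font.ttf", 0, &face);  /* placeholder font */
    FT_Set_Pixel_Sizes(face, 0, 16);             /* placeholder size */
    FT_Load_Char(face, 'A', FT_LOAD_DEFAULT);

    /* With INT_MIN defined as 0x8000, this call reportedly fails
     * with Invalid_Pixel_Size; with (-2147483647 - 1) it succeeds. */
    error = FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);

    FT_Done_Face(face);
    FT_Done_FreeType(library);
    return error;
}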
Interesting. Could you post some test code to reproduce the issue?
Regards,
mpsuzuki
Brian Sullender wrote:
I'm compiling with VS C++ 11
When INT_MIN is defined as:
#define INT_MIN 0x8000
instead of:
#define INT_MIN (-2147483647 - 1)
FT_Render_Glyph returns Invalid_Pixel_Size