Aaah, now I get it. Thank you.
I should have written the third line with dec 16384 = hex 4000 as well,
and then I would have seen it.
I guess a compiler that does not complain about an integer overflow
leads to quite shitty code. Or did he ignore compiler warnings?
I am using Borland; this is what happens with it:
* having 0x14000 without UL at the end warns that the constant is too
big for an int
* passing 0x14000UL to a function that takes "unsigned int" warns that
an unsigned int is too small to hold an unsigned long
So I assume the size_t thing in *alloc is totally correct; the user had
problems at a lower, language-related level...
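
For anyone else bitten by this, here is a minimal sketch of the silent
truncation. take_size() is a hypothetical stand-in for a malloc() whose
size parameter is 16 bits wide; unsigned short forces 16-bit behaviour
even on compilers where int is wider:

  #include <stdio.h>

  /* hypothetical stand-in for a 16-bit malloc(): the size
     parameter is only 16 bits wide */
  static void take_size(unsigned short size)
  {
      printf("size seen by the callee: %u (0x%X)\n",
             (unsigned)size, (unsigned)size);
  }

  int main(void)
  {
      unsigned long wanted = 0x14000UL;  /* 81920 bytes requested */

      /* the upper bits are dropped in the conversion:
         0x14000 & 0xFFFF = 0x4000 = 16384 */
      take_size((unsigned short)wanted);
      return 0;
  }

The explicit cast silences the Borland warning but performs exactly the
same truncation, so this prints 16384 (0x4000) on any compiler.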
On 09/19/2018 10:51 AM, Mateusz Viste wrote:
> On Wed, 19 Sep 2018 10:44:14 +0200, stecdose wrote:
>> HOW does a 16-bit truncate limit to 16384 bytes?
>>                      b1 b0  (byte 1, byte 0)
>>                    b2 | |   (byte 2)
>>                     | | |
>> decimal 81920 = hex 014000
>> decimal 65535 = hex   FFFF    size of size_t = 2 bytes
>> 16k is far less than 64k, so why does it truncate to 16k? Why not to
>> 64k, if the request is greater than 64k?
> TK Chia tried to allocate 81920 bytes (0x14000), while malloc() accepts
> an unsigned short (16 bits) only. Hence only the lowest 16 bits of
> 0x14000 were actually fed to malloc(), i.e. 0x4000, which is 16K.
> Mateusz
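
Since the quoted explanation names the root cause, here is one hedged
way to guard against it. checked_malloc() below is a hypothetical
helper (not part of any FreeDOS or Borland library) that refuses
requests that do not fit in size_t instead of letting them truncate
silently:

  #include <stdio.h>
  #include <stdlib.h>

  /* hypothetical helper: (size_t)-1 is the maximum value of size_t,
     i.e. 0xFFFF on a 16-bit compiler */
  static void *checked_malloc(unsigned long nbytes)
  {
      if (nbytes > (unsigned long)(size_t)-1)
          return NULL;  /* request would be silently truncated */
      return malloc((size_t)nbytes);
  }

  int main(void)
  {
      void *p = checked_malloc(0x14000UL);  /* 81920 bytes */
      /* on a 16-bit target this takes the "refused" branch instead
         of silently handing back a 16384-byte block */
      puts(p ? "allocated 81920 bytes" : "refused: too large for size_t");
      free(p);
      return 0;
  }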