Probably because it was late and I was thinking "shut up, gcc, I want to see
if this hack works!" instead of "thank you, gcc, you're right, that was silly
of me."  Feel free to check in that correction.
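
For the archives, here's a tiny standalone demo of what's going on with
that cast (made-up sizes, not the actual decoder state):

#include <stdio.h>

int main(void)
{
	unsigned int gif_bytes = 10;	/* bytes left in the buffer */
	int block_size = 20;		/* bytes the frame claims to need */

	/* The subtraction happens in unsigned arithmetic, so 10 - 20
	 * wraps around to a huge positive value before the cast. */
	printf("gif_bytes - block_size = %u\n", gif_bytes - block_size);

	/* The cast trick works here because the wrapped value happens
	 * to convert back to a negative int... */
	if ((int)(gif_bytes - block_size) < 0)
		printf("cast version: frame runs off the end\n");

	/* ...but the plain comparison says what we actually mean and
	 * doesn't rely on implementation-defined conversions. */
	if (gif_bytes < (unsigned int)block_size)
		printf("comparison version: frame runs off the end\n");

	return 0;
}

Both checks fire for these values, but only the second is guaranteed to
by the standard.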

Also, you weren't around in #netsurf when I said it, but part of this code
really is a bit hackish.  The use of GIF_END_OF_FRAME as a sort of
"magic error" isn't something I wanted to do.  Ideally, gif_initialise_frame
could recover by signaling the end of the GIF--which it tries to do--and
then gif_decode_frame and the LZW decoding would work just fine.  But I'm
not certain how the LZW decoding works, so I'd have to read over the
documentation to determine the correct way to "fool" the decoding process.
Then again, if falsifying the data properly requires resizing the buffer,
then maybe this code I whipped up is better left as is.
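
Roughly, the idea looks like this (a simplified sketch with made-up names
and return values, not the actual libnsgif source):

typedef enum {
	GIF_OK,			/* frame initialised normally */
	GIF_END_OF_FRAME	/* doubles as the "data ran out" signal */
} gif_result;

static gif_result initialise_frame(unsigned int gif_bytes, int block_size,
		unsigned int *frame_count)
{
	/* The frame data runs off the end of the file: instead of
	 * trying to fool the LZW decoder with falsified data, drop the
	 * partial frame and report end-of-frame so the caller stops
	 * cleanly on the last complete frame. */
	if (gif_bytes < (unsigned int)block_size) {
		if (*frame_count > 0)
			(*frame_count)--;	/* forget the truncated frame */
		return GIF_END_OF_FRAME;
	}

	/* ...normal frame initialisation would continue here... */
	return GIF_OK;
}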


Sean.

On Sat, Jan 3, 2009 at 7:47 AM, John Tytgat <[email protected]> wrote:

> In message <[email protected]> you wrote:
>
> > Author: dynis
> > Date: Sat Jan  3 01:01:10 2009
> > New Revision: 5957
> >
> > [...]
> > +             /*      Check if the frame data runs off the end of the file
> > +             */
> > +             if ((int)(gif_bytes - block_size) < 0) {
>
> Understanding the difference between signed and unsigned overflow in C
> is not always intuitive and obvious, so I'm a bit puzzled why we
> wouldn't go for the more obvious:
>
>        if (gif_bytes < (unsigned int)block_size) {
>
> with gif_bytes being unsigned and block_size signed.
>
> John.
> --
> John Tytgat
> [email protected]
>
>