On Tue, 22 Aug 2000 17:42:18 BST, Theodore Hong writes:

> It feels like a bad design choice to arbitrarily set a fixed upper
> limit to the size of data.  ("No one will ever need more than 640K of
> memory...")  Why do we need one?

As Oskar stated in his original message on the subject, fields and
messages are currently read directly into memory.  This means you can
probably crash a Freenet node just by sending it several megs of data
without a newline.  Yes, you can transfer to disk, but this introduces
complex code that has not been written yet, and will be even harder to
test properly.  You still have a limit to test (disk size), but now the
limit changes with almost every test run, and few people ever run into
it except when they're being hit by a buffer overrun attack.
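
To make that concrete, here's a rough sketch (not the actual node
code; the cap and the names are invented) of the kind of hard limit
that would turn the crash into a clean error:

    import java.io.IOException;
    import java.io.InputStream;

    public class BoundedFieldReader {
        // Invented cap, just for illustration; the real node
        // would pick (and document) its own limit.
        private static final int MAX_FIELD_BYTES = 64 * 1024;

        // Read up to the next '\n', refusing any field larger
        // than the cap instead of buffering it forever.
        public static String readField(InputStream in)
                throws IOException {
            StringBuffer buf = new StringBuffer();
            int b;
            while ((b = in.read()) != -1 && b != '\n') {
                if (buf.length() >= MAX_FIELD_BYTES) {
                    throw new IOException("field too large");
                }
                buf.append((char) b);
            }
            return buf.toString();
        }
    }

With something like that in place, several megs without a newline
gets a polite rejection instead of eating the heap.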

Personally, I've slowly come to prefer well-defined limits as opposed 
to vague assurances that "technically, there is no limit".  *Something* 
will eventually limit things anyway, often in some hard-to-replicate
realm like available swap space or some integer used by an alternate
transmission system to count packets.  I'd rather deal with upgrading 
hard limits than with trying to replicate an overrun attack bug.

> If you don't have enough temporary space to store a reply, can't we
> fall back to streaming it directly from the in-socket to the
> out-socket (the way we did in the beginning) instead of writing it 
> to disk?

As Oskar stated in his original message on the subject, that is
possible, but complex.  Time and again I've emerged from a
feeping-creatures nightmare of a project, wanting to brand the phrase
"stabilize basic code before adding new features" on my fingers.  Add
it later.
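
For what it's worth, the copy loop itself is trivial, something along
these lines (just a sketch, names invented):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class StreamRelay {
        // Copy bytes straight from the in-socket to the
        // out-socket, never holding more than one buffer's
        // worth in memory.
        public static void relay(InputStream from, OutputStream to)
                throws IOException {
            byte[] buf = new byte[4096];
            int n;
            while ((n = from.read(buf)) != -1) {
                to.write(buf, 0, n);
            }
            to.flush();
        }
    }

The loop isn't the hard part; wiring it cleanly into the message
handling is, which I take to be the complexity Oskar was talking
about.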


--Will
(not speaking for his employers)
willdye at freedom.net




_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
