On Tuesday 10 November 2009 19:18:12 Robert Hailey wrote:
> 
> On Nov 9, 2009, at 7:35 AM, Matthew Toseland wrote:
> 
> > Some Chinese feedback: Main point is there are a lot of old PCs with  
> > very limited RAM, although new ones have plenty of RAM. Broadband is  
> > however reasonable in general, making it a better candidate than  
> > Iran. Suggested making start-on-startup configurable,  
> > substantially reducing the memory footprint, etc. My worry with the  
> > former is that opennet won't last forever, and low-uptime darknet is  
> > very problematic.
> >
> > IMHO we could reasonably cut our default memory limit to 128M. The  
> > main issues are:
> > - Sorting out Library memory issues (bug #3685)
> > - Sorting out Freetalk memory issues (I haven't seen OOMs from  
> > Freetalk recently, maybe this really is fixed?)
> > - Sorting out - or ignoring - the startup memory spike on big  
> > inserts (a 4 GB insert needs 192 MB).
> >
> > On fast systems, a higher memory limit means less CPU spent in  
> > garbage collection, so maybe there is an argument for reinstating  
> > the panel in the wizard that asks the user how much memory to  
> > allocate...
> >
> > Our friend has also localised the wininstaller (this is subject to  
> > technical issues Zero3 hopefully will be able to resolve), and jSite  
> > (I will deal with this soon).
> 
> Since I have freenet installed on an old machine, my observations:  
> it made the computer all but unusable! On a hunch, I cut its  
> datastore size to a minimum and found that the impact was no longer  
> noticeable.
> 
> I'm not sure exactly what was going on (slow hard drive? VM/page  
> thrashing?), but maybe there are simpler variables (like store size)  
> that would both improve performance and reduce the memory footprint.

So if the system memory is low we should limit the datastore size? That is 
logical...
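A minimal sketch of that idea, with all names and thresholds invented for illustration (none of this is actual Freenet code): cap the default datastore when the machine's RAM is small.

```java
// Hypothetical sketch: cap the default datastore size on low-memory machines.
// All class names, field names and thresholds here are invented, not Freenet code.
public class DefaultStoreSize {
    static final long LOW_RAM_MB = 512;     // assumed "low memory" cutoff
    static final long SMALL_STORE_GB = 1;   // assumed minimal store
    static final long NORMAL_STORE_GB = 20; // assumed normal default

    /** Choose a default store size (GB) from the system's total RAM (MB). */
    public static long defaultStoreGb(long systemRamMb) {
        return systemRamMb < LOW_RAM_MB ? SMALL_STORE_GB : NORMAL_STORE_GB;
    }

    public static void main(String[] args) {
        System.out.println(defaultStoreGb(256));  // low-RAM box gets the small store
        System.out.println(defaultStoreGb(2048)); // plenty of RAM gets the normal default
    }
}
```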
> 
> It might be beneficial to run tests on memory consumption for various  
> setups (number of peers, concurrent requests, datastore size).

What was the size before and after? And the memory limit, etc.? AFAIK we use 
approximately 1 MB of RAM for every 2 GB of store...
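That rule of thumb can be sketched as arithmetic; this is a hypothetical helper (`StoreOverhead` is not a real Freenet class), assuming the stated ratio of roughly 1 MB of RAM per 2 GB of store:

```java
// Hypothetical sketch of the rule of thumb above: roughly 1 MB of RAM
// per 2 GB of datastore. Class and method names are invented for
// illustration, not real Freenet code.
public class StoreOverhead {

    /** Approximate RAM (MB) needed for a store of storeGb GB. */
    public static long ramMbForStore(long storeGb) {
        return storeGb / 2; // 1 MB RAM per 2 GB of store
    }

    /** Largest store (GB) whose overhead fits in ramBudgetMb MB of RAM. */
    public static long maxStoreGbForRam(long ramBudgetMb) {
        return ramBudgetMb * 2;
    }

    public static void main(String[] args) {
        System.out.println(ramMbForStore(100));   // a 100 GB store costs ~50 MB of RAM
        System.out.println(maxStoreGbForRam(10)); // 10 MB of RAM covers a ~20 GB store
    }
}
```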
> 
> And on a side note, it might also be helpful to know how long a  
> request has to wait in line to clear the datastore/bloom filter check,  
> or add instrumentation to gather stats for all requests. Then we might  
> be able to tell the user "your node can't keep up with your requested  
> {datastore size, number of peers}".

Good idea to have that stat, and maybe a useralert based on it, although I'm 
skeptical that we can reliably diagnose the cause - maybe we could if we knew 
the current system memory total?
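A rough sketch of what that stat might look like. All names here (`QueueWaitStats`, the warning threshold) are invented for illustration; this is not how Freenet implements stats or useralerts.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: record how long each request waits before the
// datastore/bloom-filter check, and flag when the mean wait grows too
// large. Class name and threshold are invented for illustration.
public class QueueWaitStats {
    private final AtomicLong totalWaitNanos = new AtomicLong();
    private final AtomicLong samples = new AtomicLong();
    static final double WARN_MEAN_MS = 500.0; // assumed cutoff

    /** Call when the check starts, passing the System.nanoTime() taken at enqueue. */
    public void recordWait(long enqueueNanoTime) {
        totalWaitNanos.addAndGet(System.nanoTime() - enqueueNanoTime);
        samples.incrementAndGet();
    }

    /** Mean queueing delay in milliseconds over all recorded requests. */
    public double meanWaitMs() {
        long n = samples.get();
        return n == 0 ? 0.0 : (totalWaitNanos.get() / 1_000_000.0) / n;
    }

    /** True if the node seems unable to keep up with the configured load. */
    public boolean shouldWarnUser() {
        return meanWaitMs() > WARN_MEAN_MS;
    }
}
```

A useralert could then fire when `shouldWarnUser()` stays true over some window, suggesting a smaller datastore or fewer peers.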
