Christian Grothoff wrote:
> You are right in that the code does not _guarantee_ that the CPU load 
> limitations will be satisfied.  The assumption is that if you cut out 
> (enough of) the expensive computations, you will be blocking (on 
> IO/networking events).  Now, clearly that may not happen if there is "too 
> much" network traffic (so that even with the cheapest algorithms, you 
> cannot process all of the data).  However, so far I have generally had 
> the impression that this only happened on systems that were imbalanced in 
> terms of CPU vs. network capabilities (a T1 line on a 386, to give an 
> extreme example).
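
If I follow, the load-shedding idea is roughly the following (an
illustrative C sketch; the function names and the getloadavg() sampling
are my assumptions, not gnunetd's actual code): sample the total system
load and skip optional expensive work when it exceeds the limit, so that
the process falls back to blocking on I/O.

  /* Hypothetical load-shedding sketch; not GNUnet code. */
  #include <stdlib.h>

  /* 1-minute system load average, scaled to percent of one CPU.
     Note: this is the _total_ system load, not gnunetd's own share. */
  static int
  get_cpu_load_percent (void)
  {
    double avg[1];

    if (getloadavg (avg, 1) != 1)
      return 0;                 /* unknown, assume idle */
    return (int) (avg[0] * 100.0);
  }

  static void
  maybe_do_expensive_work (int load_limit_percent)
  {
    if (get_cpu_load_percent () > load_limit_percent)
      return;                   /* shed the optional work */
    /* ... otherwise run the expensive computation here ... */
  }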

Besides the imbalanced-hardware case, this may also happen if gnunetd makes
only (or mostly) non-blocking calls to select, read, and the like, or calls
with timeouts that are too short.  Is that the case?
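
What I mean is a loop like the following (a simplified single-socket
sketch, not gnunetd's actual event loop): with a zero or very small
timeout, select() polls and returns immediately, so the loop spins at full
CPU even when the network is idle, whereas a generous timeout (or a NULL
timeout pointer) lets the process sleep until data arrives.

  #include <sys/select.h>

  static void
  event_loop (int sock)
  {
    for (;;)
      {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO (&rfds);
        FD_SET (sock, &rfds);

        /* tv = {0, 0} would make select() poll and return at once,
           so the loop busy-waits at 100% CPU; a long timeout (or a
           NULL timeout pointer) blocks until the socket is readable. */
        tv.tv_sec = 5;
        tv.tv_usec = 0;

        if (select (sock + 1, &rfds, NULL, NULL, &tv) > 0)
          {
            /* ... read and process incoming data ... */
          }
        /* on timeout: periodic housekeeping, then wait again */
      }
  }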

> The alternative would be to drop packets (or connections) if the CPU load 
> gets too high, but so far I have felt that this would likely be a bad 
> idea, especially since the CPU load may surge due to unrelated activities 
> (on most systems we measure the _total_ CPU load, not just the share 
> taken by gnunetd), which would then lead to an unwarranted exodus in 
> terms of connections/bandwidth utilization.  In other words, my feeling 
> is that the CPU load measurement itself is not reliable enough to really 
> justify "strong" measures (like dropping packets) to keep it in check.  
> Weaker measures (cheaper algorithms, slower processing of HELOs) OTOH 
> can be safe.  That's why the code currently works the way it does.

Agreed.
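
Incidentally, the total-versus-per-process distinction you mention is easy
to see on Linux/BSD: getloadavg() reflects every process on the machine (an
unrelated compile job inflates it), while getrusage(RUSAGE_SELF) counts
only the calling process's CPU time.  A small illustrative program (again
just a sketch, not how gnunetd actually samples load):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/resource.h>

  int
  main (void)
  {
    double avg[1];
    struct rusage ru;

    /* Total system load: includes _all_ processes. */
    if (getloadavg (avg, 1) == 1)
      printf ("total system load (1 min): %.2f\n", avg[0]);

    /* CPU time charged to this process alone. */
    if (getrusage (RUSAGE_SELF, &ru) == 0)
      printf ("this process, user CPU: %ld.%06ld s\n",
              (long) ru.ru_utime.tv_sec, (long) ru.ru_utime.tv_usec);
    return 0;
  }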

Thanks,
Ludovic.


_______________________________________________
Help-gnunet mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-gnunet
