[freenet-support] Two bugs reported on FMS

2009-01-28 Thread Matthew Toseland
On Wednesday 28 January 2009 07:24, 3BUIb3S50i 3BUIb3S50i wrote:
> Don Sato at tGZmfuEbnEqArnhpyqj4of3-s21B0uTliyfALlQ0bw8 wrote:
> > One of the recent Freenet builds caused my Freenet node to end up with
> > almost constant 100% CPU usage. This trouble began somewhere between the
> > 1199-1203 releases. My node used to work very nicely before that. Is
> > anyone else experiencing this problem?
> >
> > I can post info from the Stats page and the Freenet logs to help in
> > diagnosing the problem if needed. Just let me know which parts in
> > particular you're interested in. Regarding wrapper.log, I found nothing of
> > use in there, just node (re)start/stop related activity.

Please determine whether this is a memory-related problem. Add the following 
to your wrapper.conf, restart the node, and then view the file freenet.loggc:

wrapper.java.additional.3=-Xloggc:freenet.loggc

If it shows Full GCs very frequently (once a second or more), then there is a 
memory problem. If not, the problem is something else...
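
(If your wrapper.conf already has a wrapper.java.additional.3 entry, use the 
next unused index instead.) As a rough illustration, a memory-starved node's 
freenet.loggc tends to be dominated by back-to-back Full GC lines, something 
like the excerpt below; the exact format depends on the JVM, and the numbers 
here are invented:

1043.2: [Full GC 507904K->506112K(520256K), 1.2345678 secs]
1044.7: [Full GC 507904K->506240K(520256K), 1.2201122 secs]
1046.1: [Full GC 507904K->506368K(520256K), 1.2408901 secs]

If you mostly see ordinary [GC ...] lines with only an occasional Full GC, 
memory is probably not the issue.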

A stack dump might also be interesting. If you are doing a big download or 
upload, there will inevitably be periods when Freenet uses lots of CPU for 
FEC encoding/decoding or for compression/decompression, but this work 
generally runs at a low priority.
> >
> > I even tried complete reinstall of the node with new datastore, but to no
> > avail. Datastore type is salt-hash.

Is the download queue empty?
> >
> > My specs:
> > JVM Info
> >
> > * Used Java memory: 159 MiB
> > * Allocated Java memory: 244 MiB
> > * Maximum Java memory: 508 MiB
> > * Running threads: 280/500

That may not be a memory problem, then... Please do the above steps anyway. 
If it isn't a memory problem, a stack dump might be useful (note that it may 
sometimes contain sensitive information...).
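
To get a stack (thread) dump on Linux, something along these lines should 
work; this is only a sketch, and jstack is only available if you have a full 
JDK installed rather than a bare JRE:

pgrep -fl java                         # find the Freenet node's java pid
jstack <pid> > freenet-threaddump.txt  # dump all thread stacks to a file

Alternatively, sending SIGQUIT to the java process (kill -QUIT <pid>) makes 
the JVM print a thread dump to its console output, which the wrapper normally 
captures in wrapper.log.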

> > * Available CPUs: 1
> > * Java Version: 1.6.0_10
> > * JVM Vendor: Sun Microsystems Inc.
> > * JVM Version: 11.0-b15
> > * OS Name: Linux
> > * OS Version: 2.6.27-9-generic
> > * OS Architecture: i386
> 
> bubba at Uv~laZHgpQPILgjUTc2bD~mMDm5r1PJKYsS3kZNUixM wrote:
> > Tommy[D]@EefdujDZxdWxl0qusX0cJofGmJBvd3dF4Ty61PZy8Y8 wrote:
> >
> >> myidentity at 1QowK8lzyEYNUsI0yGamWcd6ox80XQkKr8kCS6PmJ5Q wrote:
> >>> Has happened a couple of times in the last couple of days. All
> >>> throughput simply stops with my node. Once I simply waited a long time
> >>> and it started to work again, but the speed was very, very slow, so I
> >>> restarted the node.
> >>>
> >>> One time I confirmed for sure that it died for 6 hours until I got back
> >>> to my computer and restarted the node.
> >>>
> >>> After restarting the node it seems to work fine again until it dies.
> >>> This is not common; maybe it has happened 3-4 times since the last
> >>> update.
> >>>
> >>> I run a fast computer with lots of memory. 1 GiB given to Freenet.
> >>>
> >>> I will continue to monitor this and report back if it continues.
> >>
> >> It would be nice to have some data out of wrapper.log or logs/*,
> >> otherwise it is hard to find the problem and fix it. Other things like
> >> node stats can also be interesting.
> >
> > Same here; I think it's a memory leak.

That is easily confirmed; see the procedure mentioned above.

[freenet-support] Two bugs reported on FMS

2009-01-28 Thread 3BUIb3S50i 3BUIb3S50i
Don Sato at tGZmfuEbnEqArnhpyqj4of3-s21B0uTliyfALlQ0bw8 wrote:
> One of the recent Freenet builds caused my Freenet node to end up with
> almost constant 100% CPU usage. This trouble began somewhere between the
> 1199-1203 releases. My node used to work very nicely before that. Is
> anyone else experiencing this problem?
>
> I can post info from the Stats page and the Freenet logs to help in
> diagnosing the problem if needed. Just let me know which parts in
> particular you're interested in. Regarding wrapper.log, I found nothing of
> use in there, just node (re)start/stop related activity.
>
> I even tried complete reinstall of the node with new datastore, but to no
> avail. Datastore type is salt-hash.
>
> My specs:
> JVM Info
>
> * Used Java memory: 159 MiB
> * Allocated Java memory: 244 MiB
> * Maximum Java memory: 508 MiB
> * Running threads: 280/500
> * Available CPUs: 1
> * Java Version: 1.6.0_10
> * JVM Vendor: Sun Microsystems Inc.
> * JVM Version: 11.0-b15
> * OS Name: Linux
> * OS Version: 2.6.27-9-generic
> * OS Architecture: i386

bubba at Uv~laZHgpQPILgjUTc2bD~mMDm5r1PJKYsS3kZNUixM wrote:
> Tommy[D]@EefdujDZxdWxl0qusX0cJofGmJBvd3dF4Ty61PZy8Y8 wrote:
>
>> myidentity at 1QowK8lzyEYNUsI0yGamWcd6ox80XQkKr8kCS6PmJ5Q wrote:
>>> Has happened a couple of times in the last couple of days. All throughput
>>> simply stops with my node. Once I simply waited a long time and it started
>>> to work again, but the speed was very, very slow, so I restarted the node.
>>>
>>> One time I confirmed for sure that it died for 6 hours until I got back to
>>> my computer and restarted the node.
>>>
>>> After restarting the node it seems to work fine again until it dies. This
>>> is not common; maybe it has happened 3-4 times since the last update.
>>>
>>> I run a fast computer with lots of memory. 1 GiB given to Freenet.
>>>
>>> I will continue to monitor this and report back if it continues.
>>
>> It would be nice to have some data out of wrapper.log or logs/*, otherwise
>> it is hard to find the problem and fix it. Other things like node stats can
>> also be interesting.
>
> Same here; I think it's a memory leak.


-- 
3buib3s50i at gmail.com | dimonqmfcb at gmx.com


