Thanks. I have identified the infinite loop. It is in

org.apache.mahout.cf.taste.hadoop.item.UserVectorToCooccurrenceMapper.maybePruneUserVector(UserVectorToCooccurrenceMapper.java:88)

where the resultingSizeAtCutoff variable remains zero; it never increases, so the loop's exit condition is never satisfied.

Tamas

On 04/05/2010 15:35, Vimal Mathew wrote:
"kill -QUIT" will cause the stack trace to be dumped to stderr (which
is usually a log file). You can also try

jstack [java process ID]

to read the stack trace directly.

You can use the "jps" command to list the Java processes running on a system.
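Putting the two commands together, the workflow looks roughly like this (the process ID and class names below are illustrative, not from an actual run). The last two lines simulate checking a saved dump, so the snippet is self-contained:

```shell
# 1. Find the stuck Hadoop child task (IDs and names are examples only):
#      $ jps -l
#      12345 org.apache.hadoop.mapred.Child
#      23456 sun.tools.jps.Jps
#
# 2. Dump its stack and look for the suspect frame:
#      $ jstack 12345 | grep -A3 'UserVectorToCooccurrenceMapper'
#
# The same grep can be run offline on a saved dump; simulated here
# with a fabricated two-line dump file:
printf '%s\n' \
  '"main" prio=10 runnable' \
  '   at org.apache.mahout.cf.taste.hadoop.item.UserVectorToCooccurrenceMapper.maybePruneUserVector(UserVectorToCooccurrenceMapper.java:88)' \
  > /tmp/dump.txt
grep -c 'maybePruneUserVector' /tmp/dump.txt   # → 1
```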



On Mon, May 3, 2010 at 7:26 PM, Sean Owen <sro...@gmail.com> wrote:
I think the infinite loop theory is good.

As a crude way to debug, you can log on to a worker machine, locate
the java process that may be stuck, and:

kill -QUIT [java process ID]

This just makes it dump the stack of every thread. Do that a few times
and you can easily spot an infinite loop, because the stuck thread will
be in the same place every time.

http://java.sun.com/developer/technicalArticles/Programming/Stacktrace/
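A sketch of the "dump it a few times and compare" idea (the PID is illustrative; for Hadoop tasks the dump usually lands in the task's stderr log under logs/userlogs/). The last lines simulate two saved dumps so the comparison step is self-contained:

```shell
# Send SIGQUIT a few seconds apart; each signal makes the JVM print a
# full thread dump to its stderr without killing it (PID is an example):
#   kill -QUIT 12345
#   sleep 5
#   kill -QUIT 12345
#
# If consecutive dumps show the thread in the same frame each time,
# that frame is almost certainly the loop. Simulated with two dumps:
printf 'at Foo.bar(Foo.java:10)\n' > /tmp/dump1.txt
printf 'at Foo.bar(Foo.java:10)\n' > /tmp/dump2.txt
diff -q /tmp/dump1.txt /tmp/dump2.txt && echo "same frame both times"
```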

On Tue, May 4, 2010 at 12:15 AM, Tamas Jambor <jambo...@googlemail.com> wrote:
It should be OK, because the hosts are on a local network, properly set up
by IT support.

I guess the conf files should be OK too, because it runs the first two jobs
without a problem and only fails on the third. It also runs the other Hadoop
examples.

I will look into how to debug a Hadoop project; maybe I can track down the
problem that way.
