Des,
Is speculative execution turned on in your config? Since your reducer has
side effects (in both of the code paths you mention), it should be turned off.
Put the following in hadoop-site.xml:
<property>
  <name>mapred.speculative.execution</name>
  <value>false</value>
  <description>If true, then multiple instances of some map and reduce tasks
  may be executed in parallel.</description>
</property>
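
If you would rather leave the cluster-wide setting alone and only switch it
off for this job, you can set the same property on the JobConf before
submitting. A minimal sketch (the class name is just illustrative; it reuses
the NutchJob/NutchConfiguration setup from your own snippet below):

import org.apache.hadoop.mapred.JobConf;
import org.apache.nutch.util.NutchConfiguration;
import org.apache.nutch.util.NutchJob;

public class NoSpeculationJob {
  public static void main(String[] args) {
    // Same property as in hadoop-site.xml, but scoped to this one job.
    JobConf job = new NutchJob(NutchConfiguration.create());
    job.setBoolean("mapred.speculative.execution", false);
    // ...configure the mapper/reducer and input/output paths as usual,
    // then submit the job (e.g. with JobClient.runJob(job)).
  }
}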
- Milind
On 7/27/07 4:36 AM, "DES" <[EMAIL PROTECTED]> wrote:
> hello,
>
> I tried nutch with hadoop nightly builds (in hudson #135 and newer) and got
> following problem:
>
>
> java.io.IOException: Lock obtain timed out:
> [EMAIL PROTECTED]://xxx.xxx.xxx.xxx:9000/user/nutch/crawl/indexes/part-00020/write.lock
>   at org.apache.lucene.store.Lock.obtain(Lock.java:69)
>   at org.apache.lucene.index.IndexReader.aquireWriteLock(IndexReader.java:526)
>   at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:551)
>   at org.apache.nutch.indexer.DeleteDuplicates.reduce(DeleteDuplicates.java:451)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:323)
>   at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1763)
>
>
> I think the reason could be the Lucene locks.
> I just tried the following code and got exactly the same error:
>
> // open the Nutch index on HDFS and try to delete a document
> String indexPath = "crawl/index";
> Path index = new Path(indexPath);
> Configuration conf = NutchConfiguration.create();
> JobConf job = new NutchJob(conf);
> FileSystem fs = FileSystem.get(job);
> FsDirectory dir = new FsDirectory(fs, index, false, conf);
> IndexReader reader = IndexReader.open(dir);
> reader.deleteDocument(0); // fails with the same "Lock obtain timed out" error
>
> Can somebody tell me if there is a solution for that, or should I just drop
> back to an older Hadoop version (e.g. 0.12.x)?
>
> thanks
>
> des
--
Milind Bhandarkar
408-349-2136
([EMAIL PROTECTED])