Found it, and for the record:

Don't forget to increase your number of open files per user, or Hadoop won't
work. On Ubuntu you need to set ulimit much higher than the default of 1024.
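A minimal sketch of checking and raising the limit on Ubuntu (the 65536 value and the `hadoop` user name are illustrative; tune them for your cluster):

```shell
# Show the current per-process open-file limit (Ubuntu default: 1024)
ulimit -n

# To persist a higher limit for the user running the Hadoop daemons,
# add lines like these to /etc/security/limits.conf:
#   hadoop  soft  nofile  65536
#   hadoop  hard  nofile  65536
# Then log the user out and back in (and restart the Hadoop daemons)
# for the new limit to take effect.
```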

2011/6/6 MilleBii <[email protected]>

> Need help for my production system.
>
> I get the following errors when doing a merge:
>
> Exception in thread "Lucene Merge Thread #0"
>> org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException:
>> Could not obtain block: blk_-2730923764012764374_55331
>> file=/user/hadoop/crawl/indexed-segments/20110404033101/part-00000/_12i.prx
>> at
>> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:309)
>>     at
>> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:286)
>>
>
>
> The file is quite fine; I tried to remove the bogus segment, but then it
> fails on another one. I can read the segment with Luke.
> It used to work fine before. The only difference is that now I have a real
> cluster, whereas before it used to be pseudo-distributed.
>
> Am I doing something wrong?
>
>
>
> --
> -MilleBii-
>



-- 
-MilleBii-
