Hello

I'm not sure, but there is a parameter in the hadoop-site.xml conf file that
could be a solution to your problem:


<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

You can find the explanation for this parameter in the Nutch/Hadoop tutorial:
The dfs.replication property states how many servers a single file should be
replicated to before it becomes available. Because we are using only a
single server right now, we have this set to 1. If you set this value higher
than the number of data nodes you have available, you will start
seeing a lot of (Zero targets found, forbidden1.size=1) type errors in the
logs. We will increase this value as we add more nodes.
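One thing to keep in mind: changing dfs.replication in hadoop-site.xml only affects files written after the change. As a rough sketch (assuming a standard Hadoop install of that era; the HDFS path here is just an example), you can re-replicate existing files and check how many datanodes are actually up with:

```shell
# Re-replicate existing files to factor 2 (-R recurses into
# directories, -w waits until replication actually completes)
bin/hadoop dfs -setrep -w 2 -R /user/nutch/crawl

# Report the live datanodes, so you don't set dfs.replication
# higher than the number of nodes available
bin/hadoop dfsadmin -report
```

These commands need a running cluster, so adjust the path to your own crawl directory.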
-- 
View this message in context: 
http://www.nabble.com/Nutch-0.9-dev-trunk-generate-task-failing-not-completing-tf3158347.html#a8767104
Sent from the Nutch - User mailing list archive at Nabble.com.
