Hello

I'm not sure, but there is a parameter in the hadoop-site.xml conf file that
could be a solution to your problem:


<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

You can find an explanation of this parameter in the Nutch/Hadoop tutorial:
The dfs.replication property states how many servers a single file should be
replicated to before it becomes available. Because we are using only a
single server right now, we have this set to 1. If you set this value higher
than the number of datanodes you have available, you will start
seeing a lot of (Zero targets found, forbidden1.size=1) type errors in the
logs. We will increase this value as we add more nodes.
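
In case it helps, here is a minimal sketch of how that property sits in
conf/hadoop-site.xml (the value 2 is just an example; keep it at or below
the number of datanodes you actually have running):

<?xml version="1.0"?>
<!-- conf/hadoop-site.xml: site-specific overrides of hadoop-default.xml -->
<configuration>

  <!-- How many datanodes each block is replicated to.
       Setting this higher than the number of live datanodes
       leads to the "Zero targets found" errors mentioned above. -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

</configuration>

As far as I know, the setting only applies to files written after the
change, and the DFS daemons need to be restarted to pick it up.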

