Thank you very much for your quick response :-)

Hope Amazon Web Services will help me with this one.
IP


On 07/24/2012 02:06 AM, Jean-Daniel Cryans wrote:
... INFO mapred.JobClient: Task Id : attempt_201207232344_0001_m_000000_0,
Status : FAILED
java.lang.IllegalArgumentException: *Can't read partitions file*
     at
org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
...

While googling for a solution I found this link:
http://hbase.apache.org/book/trouble.mapreduce.html
It suggests a misconfiguration concerning a fully distributed
environment.

I would therefore like to ask whether it is even possible to bulk-import
data in pseudo-distributed mode, and if so, does anyone have a guess
about this error?
AFAIK you just can't use the local job tracker for this, so you do
need to start one.

J-D
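
[Editor's note: a minimal sketch of what J-D describes, assuming a Hadoop 1.x pseudo-distributed layout and the default JobTracker port 9001. The point is that `mapred.job.tracker` must name a running JobTracker rather than the default value `local`, which runs jobs in-process and cannot read the HDFS partitions file written for TotalOrderPartitioner:]

```xml
<!-- conf/mapred-site.xml: point MapReduce at a real JobTracker
     instead of the default "local" in-process runner -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

With that set, the JobTracker and TaskTracker are started with `bin/start-mapred.sh` before submitting the bulk-load job.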
