[ https://issues.apache.org/jira/browse/PHOENIX-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994603#comment-13994603 ]

Gabriel Reid commented on PHOENIX-975:
--------------------------------------

This is happening because CDH uses the Hadoop 2 APIs for MapReduce, while the 
build of Phoenix that you're using is compiled against Hadoop 1. The two APIs 
are source-compatible but not binary-compatible: org.apache.hadoop.mapreduce.Counter 
changed from a class in Hadoop 1 to an interface in Hadoop 2, which is exactly 
what the IncompatibleClassChangeError in your task logs is reporting. Code 
compiled against one version won't run against the other without recompilation.

In order to get the bulk load running on CDH, you'll need to build a Hadoop 2 
version of Phoenix. You can do this by supplying "-Dhadoop.profile=2" to Maven 
when building the project.
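For example, from the top level of the Phoenix source checkout (a minimal 
sketch; skipping tests is just an assumption to speed up the build and is 
optional):

    mvn clean package -Dhadoop.profile=2 -DskipTests

The Hadoop 2 build of phoenix-core-3.0.0-incubating.jar should then be under 
the module's target directory (typically phoenix-core/target/); that's the jar 
to copy onto the cluster nodes and use for the CsvBulkLoadTool invocation.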

> Can't make bulk loading work
> ----------------------------
>
>                 Key: PHOENIX-975
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-975
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>         Environment: Cloudera 4.8.0
>            Reporter: Cristian Armaselu
>
> Copied server jar on all 6 nodes:
> -rw-r--r-- 1 root root 1949565 May  9 19:43 
> /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-core-3.0.0-incubating.jar
> Ran the below command to import an hdfs file in a pre-created table:
> hadoop --config /etc/hadoop/conf/ jar phoenix-core-3.0.0-incubating.jar 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool -libjars antlr-runtime-3.4.jar 
> --table CUSTOMERS3 --input /tmp/customers.dat
> Got a lot of exceptions in the tasks such as:
> 2014-05-11 15:44:40,795 FATAL org.apache.hadoop.mapred.Child: Error running 
> child : java.lang.IncompatibleClassChangeError: Found interface 
> org.apache.hadoop.mapreduce.Counter, but class was expected
>       at 
> org.apache.phoenix.mapreduce.CsvToKeyValueMapper$MapperUpsertListener.upsertDone(CsvToKeyValueMapper.java:261)
>       at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:148)
>       at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:128)
>       at 
> org.apache.phoenix.mapreduce.CsvToKeyValueMapper.map(CsvToKeyValueMapper.java:134)
>       at 
> org.apache.phoenix.mapreduce.CsvToKeyValueMapper.map(CsvToKeyValueMapper.java:65)
>       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>       at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>       at org.apache.hadoop.mapred.Child.main(Child.java:262)
> 2014-05-11 16:05:04,639 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for 
> server null, unexpected error, closing socket connection and attempting 
> reconnect
> java.net.ConnectException: Connection refused
>       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>       at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
>       at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
>       at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
> The MR jobs are killed after they time out (600 sec)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
