Hi, I believe your client configuration isn't set up correctly. You can try the default approach described at http://phoenix.apache.org/bulk_dataload.html
Ex:

    hadoop jar phoenix-<version>-client.jar org.apache.phoenix.mapreduce.index.IndexTool \
        --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX \
        --output-path ASYNC_IDX_HFILES

HTH.

On Thu, Apr 21, 2016 at 7:27 PM, 金砖 <[email protected]> wrote:

> The async index job (http://phoenix.apache.org/secondary_indexing.html):
>
>     ${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
>         --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX \
>         --output-path ASYNC_IDX_HFILES
>
> How do I submit that job to a YARN cluster?
>
> On a single node with a large amount of data, the process gets killed in the reduce stage.
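To answer the YARN part of the question directly: IndexTool is a standard MapReduce job run through ToolRunner, so it goes to the cluster whenever the client-side Hadoop configuration points at YARN rather than the local job runner. A minimal sketch, assuming your cluster configs live under /etc/hadoop/conf (that path, and using -D to override mapreduce.framework.name for a one-off run, are assumptions to adapt to your setup):

    # Point the client at the cluster configuration so the job is
    # submitted to the ResourceManager instead of the local runner.
    export HADOOP_CONF_DIR=/etc/hadoop/conf

    # mapreduce.framework.name=yarn (normally set in mapred-site.xml)
    # is what routes the job to YARN; a -D override works here because
    # the tool accepts generic Hadoop options via ToolRunner.
    hadoop jar phoenix-<version>-client.jar org.apache.phoenix.mapreduce.index.IndexTool \
        -D mapreduce.framework.name=yarn \
        --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX \
        --output-path ASYNC_IDX_HFILES

Running on the cluster should also help with the reduce-stage kill you saw: the reducers get spread across NodeManagers instead of competing for memory on a single node.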
