Hi, please check this error; it usually means the native Hadoop library (with Snappy support) could not be loaded by the task JVMs:
Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

BTW, why are you using Kylin 1.5.3? It is more than two years old.
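The `buildSupportsSnappy()Z` signature is typically an `UnsatisfiedLinkError`: the job is configured to write Snappy-compressed output, but the native Hadoop library is not available to the reduce tasks (the `WARN util.NativeCodeLoader: Unable to load native-hadoop library` line in your log points the same way). A quick check, assuming `hadoop` is on the PATH of every node, is:

```shell
# List which native libraries this Hadoop installation can actually load.
# A healthy setup prints something like "snappy: true .../libsnappy.so.1";
# "snappy: false" (or "hadoop: false") confirms the native library is missing.
hadoop checknative -a
```

If Snappy shows as unavailable, you can either install the native Hadoop/Snappy libraries on all nodes, or (as a workaround) remove or replace the `SnappyCodec` settings in Kylin's job configuration files, which in Kylin 1.5.x should be `$KYLIN_HOME/conf/kylin_hive_conf.xml` and `$KYLIN_HOME/conf/kylin_job_conf.xml` (paths are from memory; please verify against your installation).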

林荣任 <[email protected]> 于2018年10月8日周一 下午2:54写道:

> Cluster environment:
>
> hadoop 2.6.4
> hive 1.2.1 (apache-hive-1.2.1-bin)
> hbase-1.1.3-bin.tar
> zookeeper 3.4.5
> apache-kylin-1.5.3-HBase1.x-bin.tar
>
> When building the cube and watching it in the Monitor module:
>
> Step 1, Count Source Table, completes successfully.
> Step 2, Create Intermediate Flat Hive Table, fails with an Error.
> The log shows:
>
> total input rows = 4541
> expected input rows per mapper = 1000000
> reducers for RedistributeFlatHiveTableStep = 1
> Create and distribute table, cmd:
> hive -e "SET dfs.replication=2;
> SET hive.exec.compress.output=true;
> SET hive.auto.convert.join.noconditionaltask=true;
> SET hive.auto.convert.join.noconditionaltask.size=100000000;
> SET
> mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET mapred.output.compression.type=BLOCK;
> SET mapreduce.job.split.metainfo.maxsize=-1;
>
> set mapreduce.job.reduces=1;
>
> set hive.merge.mapredfiles=false;
>
> USE default;
> DROP TABLE IF EXISTS
> kylin_intermediate_kylin_sales_cube_desc_20120101000000_20121201000000;
> CREATE EXTERNAL TABLE IF NOT EXISTS
> kylin_intermediate_kylin_sales_cube_desc_20120101000000_20121201000000
> (
> DEFAULT_KYLIN_SALES_PART_DT date
> ,DEFAULT_KYLIN_SALES_LEAF_CATEG_ID bigint
> ,DEFAULT_KYLIN_SALES_LSTG_SITE_ID int
> ,DEFAULT_KYLIN_CATEGORY_GROUPINGS_META_CATEG_NAME string
> ,DEFAULT_KYLIN_CATEGORY_GROUPINGS_CATEG_LVL2_NAME string
> ,DEFAULT_KYLIN_CATEGORY_GROUPINGS_CATEG_LVL3_NAME string
> ,DEFAULT_KYLIN_SALES_LSTG_FORMAT_NAME string
> ,DEFAULT_KYLIN_SALES_PRICE decimal(19,4)
> ,DEFAULT_KYLIN_SALES_SELLER_ID bigint
> )
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\177'
> STORED AS SEQUENCEFILE
> LOCATION
> '/kylin/kylin_metadata/kylin-cc7811cc-3494-4db0-b24e-4a3d76d22186/kylin_intermediate_kylin_sales_cube_desc_20120101000000_20121201000000';
> SET dfs.replication=2;
> SET hive.exec.compress.output=true;
> SET hive.auto.convert.join.noconditionaltask=true;
> SET hive.auto.convert.join.noconditionaltask.size=100000000;
> SET
> mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET mapred.output.compression.type=BLOCK;
> SET mapreduce.job.split.metainfo.maxsize=-1;
> INSERT OVERWRITE TABLE
> kylin_intermediate_kylin_sales_cube_desc_20120101000000_20121201000000
> SELECT
> KYLIN_SALES.PART_DT
> ,KYLIN_SALES.LEAF_CATEG_ID
> ,KYLIN_SALES.LSTG_SITE_ID
> ,KYLIN_CATEGORY_GROUPINGS.META_CATEG_NAME
> ,KYLIN_CATEGORY_GROUPINGS.CATEG_LVL2_NAME
> ,KYLIN_CATEGORY_GROUPINGS.CATEG_LVL3_NAME
> ,KYLIN_SALES.LSTG_FORMAT_NAME
> ,KYLIN_SALES.PRICE
> ,KYLIN_SALES.SELLER_ID
> FROM DEFAULT.KYLIN_SALES as KYLIN_SALES
> INNER JOIN DEFAULT.KYLIN_CAL_DT as KYLIN_CAL_DT
> ON KYLIN_SALES.PART_DT = KYLIN_CAL_DT.CAL_DT
> INNER JOIN DEFAULT.KYLIN_CATEGORY_GROUPINGS as KYLIN_CATEGORY_GROUPINGS
> ON KYLIN_SALES.LEAF_CATEG_ID = KYLIN_CATEGORY_GROUPINGS.LEAF_CATEG_ID AND
> KYLIN_SALES.LSTG_SITE_ID = KYLIN_CATEGORY_GROUPINGS.SITE_ID
> WHERE (KYLIN_SALES.PART_DT >= '2012-01-01' AND KYLIN_SALES.PART_DT <
> '2012-12-01')
>  DISTRIBUTE BY RAND();
>
> "
>
> Logging initialized using configuration in
> jar:file:/home/hadoop/apps/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
> OK
> Time taken: 0.884 seconds
> OK
> Time taken: 0.577 seconds
> OK
> Time taken: 0.815 seconds
> Query ID = hadoop_20181008182637_13291646-6c23-4431-8a8e-401ced7aa67a
> Total jobs = 1
> 18/10/08 18:26:54 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Execution log at:
> /tmp/hadoop/hadoop_20181008182637_13291646-6c23-4431-8a8e-401ced7aa67a.log
> 2018-10-08 18:26:57 Starting to launch local task to process map
> join; maximum memory = 518979584
> 2018-10-08 18:26:59 Dump the side-table for tag: 1 with group count: 144
> into file:
> file:/tmp/hadoop/bba84efe-adae-431c-a167-14cd272ebd80/hive_2018-10-08_18-26-37_130_1316609404572806670-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
> 2018-10-08 18:26:59 Uploaded 1 File to:
> file:/tmp/hadoop/bba84efe-adae-431c-a167-14cd272ebd80/hive_2018-10-08_18-26-37_130_1316609404572806670-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
> (10893 bytes)
> 2018-10-08 18:26:59 Dump the side-table for tag: 0 with group count: 334
> into file:
> file:/tmp/hadoop/bba84efe-adae-431c-a167-14cd272ebd80/hive_2018-10-08_18-26-37_130_1316609404572806670-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile10--.hashtable
> 2018-10-08 18:26:59 Uploaded 1 File to:
> file:/tmp/hadoop/bba84efe-adae-431c-a167-14cd272ebd80/hive_2018-10-08_18-26-37_130_1316609404572806670-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile10--.hashtable
> (123354 bytes)
> 2018-10-08 18:26:59 End of local task; Time Taken: 2.845 sec.
> Execution completed successfully
> MapredLocal task succeeded
> Launching Job 1 out of 1
> Number of reduce tasks not specified. Defaulting to jobconf value of: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Starting Job = job_1538993274007_0002, Tracking URL =
> http://Master:8088/proxy/application_1538993274007_0002/
> Kill Command = /home/hadoop/apps/hadoop-2.6.4/bin/hadoop job  -kill
> job_1538993274007_0002
> Hadoop job information for Stage-3: number of mappers: 1; number of
> reducers: 1
> 2018-10-08 18:27:09,693 Stage-3 map = 0%,  reduce = 0%
> 2018-10-08 18:27:15,984 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU
> 1.7 sec
> 2018-10-08 18:27:38,925 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU
> 1.7 sec
> MapReduce Total cumulative CPU time: 1 seconds 700 msec
> Ended Job = job_1538993274007_0002 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1538993274007_0002_m_000000 (and more) from job
> job_1538993274007_0002
>
> Task with the most failures(4):
> -----
> Task ID:
>   task_1538993274007_0002_r_000000
>
> URL:
>
> http://Master:8088/taskdetails.jsp?jobid=job_1538993274007_0002&tipid=task_1538993274007_0002_r_000000
> -----
> Diagnostic Messages for this Task:
> Error: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched:
> Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 1.7 sec   HDFS Read:
> 538994 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 1 seconds 700 msec
>
>
> I can't figure out where the problem is. Does anyone know? Any help would be appreciated, thanks!
>
>


-- 
Best regards,

Shaofeng Shi 史少锋
