To switch to LZO:
1) Hive/MR compression: search for "SnappyCodec" in the conf/*.xml files and
replace each occurrence with the LZO codec.
2) HBase compression: in conf/kylin.properties, set
"kylin.hbase.default.compression.codec" to "lzo".
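
For step 1, the result in conf/kylin_job_conf.xml would look roughly like the
following (these are the usual MapReduce compression keys; the exact set of
properties shipped with your Kylin version may differ):

```xml
<!-- conf/kylin_job_conf.xml: every <value> that previously read
     org.apache.hadoop.io.compress.SnappyCodec becomes the LZO codec -->
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```

For step 2, the corresponding line in conf/kylin.properties is
kylin.hbase.default.compression.codec=lzo.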

2017-04-19 10:56 GMT+08:00 35925138 <[email protected]>:

> I worked around that problem by setting hive.aux.jars.path in
> hive-site.xml to force the hadoop-lzo jars onto the classpath. However, the
> third step of the cube build now fails with the error below.
> The error reported in YARN is:
> Error: java.io.IOException: Unable to initialize any output collector
>         at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
>         at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Container killed by the ApplicationMaster. Container killed on request.
> Exit code is 143. Container exited with a non-zero exit code 143.
>
> I checked the task logs, and they contain the following:
>
> 2017-04-19 10:48:15,875 WARN [main] org.apache.hadoop.mapred.MapTask: Unable to initialize MapOutputCollector org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec was not found.
>         at org.apache.hadoop.mapred.JobConf.getMapOutputCompressorClass(JobConf.java:798)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1019)
>         at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:401)
>         at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1980)
>         at org.apache.hadoop.mapred.JobConf.getMapOutputCompressorClass(JobConf.java:796)
>         ... 11 more
>
> Running hadoop/bin/hadoop checknative -a on my Hadoop installation
> gives the following output:
> 17/04/19 09:55:10 WARN bzip2.Bzip2Factory: Failed to load/initialize
> native-bzip2 library system-native, will use pure-Java version
> 17/04/19 09:55:10 INFO zlib.ZlibFactory: Successfully loaded & initialized
> native-zlib library
> Native library checking:
> hadoop:  true /home/hadooper/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:    true /lib64/libz.so.1
> snappy:  false
> lz4:     true revision:99
> bzip2:   false
> openssl: true /usr/lib64/libcrypto.so
>
> LZO should be fine, but I don't know how to handle this problem. Any
> advice would be greatly appreciated.
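
The ClassNotFoundException above usually means the hadoop-lzo jar is not on
the MapReduce task classpath (note that "checknative" only reports native
libraries, and it does not list lzo at all). A common fix, sketched here with
an assumed jar path and an illustrative codec list, is to register the codec
classes and put the jar on the task classpath:

```xml
<!-- core-site.xml: register the LZO codec classes (the class name comes
     from the stack trace above; the surrounding codec list is illustrative) -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>

<!-- mapred-site.xml: add the hadoop-lzo jar to the task classpath
     (the jar location below is an assumption; use your actual path) -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,/home/hadooper/hadoop/lib/hadoop-lzo.jar</value>
</property>
```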
>
>
> ------------------ Original Message ------------------
> *From:* "35925138";<[email protected]>;
> *Sent:* Friday, April 14, 2017, 4:25 PM
> *To:* "dev"<[email protected]>;
> *Subject:* Re: Re: Re: Re: Error: java.io.IOException: Unable to initialize
> any output collector
>
> I have confirmed this. The configuration file in my Hadoop sets it to
> 2047, which does not exceed 2048.
>
>
> ------------------ Original Message ------------------
> *From:* "roger shi";<[email protected]>;
> *Sent:* Thursday, April 13, 2017, 4:30 PM
> *To:* "dev"<[email protected]>;
> *Subject:* Re: Re: Re: Error: java.io.IOException: Unable to initialize any
> output collector
>
> This looks like a Hadoop environment configuration issue.
>
>
> One possible reason is that "mapreduce.task.io.sort.mb" is set too large
> for the failed job. For the details, please refer to
> https://issues.apache.org/jira/browse/MAPREDUCE-6194.
>
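
The constraint behind that JIRA is that MapTask's in-memory sort buffer is
addressed in bytes with a signed 32-bit integer, so "mapreduce.task.io.sort.mb"
must stay below 2048 MB. A minimal sketch of that bound (the helper name is
mine, not a Hadoop API):

```python
# Hedged sketch of the MapOutputBuffer size limit described in
# MAPREDUCE-6194: values of 2048 MB or more make the collector fail
# to initialize, surfacing as "Unable to initialize any output collector".
MAX_IO_SORT_MB = 2047  # largest value MapTask's sort buffer accepts

def io_sort_mb_is_valid(mb: int) -> bool:
    """Return True if a mapreduce.task.io.sort.mb value is within the limit."""
    return 0 < mb <= MAX_IO_SORT_MB

print(io_sort_mb_is_valid(2047))  # True  (the poster's value: accepted)
print(io_sort_mb_is_valid(2048))  # False (rejected by MapOutputBuffer)
```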
>
>
>
> ________________________________
> From: 35925138 <[email protected]>
> Sent: April 13, 2017, 15:13:45
> To: dev
> Subject: Re: Re: Error: java.io.IOException: Unable to initialize any output
> collector
>
> That is the complete error message; that is all that was reported.
>
>
>
>
> ------------------ Original Message ------------------
> From: "roger shi";<[email protected]>;
> Sent: Thursday, April 13, 2017, 2:51 PM
> To: "dev"<[email protected]>;
>
> Subject: Re: Error: java.io.IOException: Unable to initialize any output
> collector
>
>
>
> Could you please attach the complete stack trace of the error?
>
> ________________________________
> From: 35925138 <[email protected]>
> Sent: April 13, 2017, 13:45:09
> To: dev
> 主题: Error: java.io.IOException: Unable to initialize any output collector
>
> My Kylin version is 1.6.0 and my Hadoop version is 2.6.0. While building a
> cube, it failed at the step "Redistribute Flat Hive Table".
> The log in the Kylin web UI shows:
> no counters for job job_1492049582227_0011
>
>
> I checked the userlogs in Hadoop, which report this error:
> 2017-04-13 11:23:33,758 ERROR [IPC Server handler 1 on 43772] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1492049582227_0011_m_000000_0 - exited : java.io.IOException: Unable to initialize any output collector
>         at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
>         at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>
>
> What is going on here, and how should I fix it? Any help is appreciated.
> Thank you.
>
>


-- 
Best regards,

Shaofeng Shi 史少锋
