Hi all,

We've hit a java.lang.ArrayIndexOutOfBoundsException while trying to
build a 1-week data cube using MapReduce; building fewer days, or merging
smaller cubes into one week, works fine. We are using Kylin 2.6.4 and
HDP-2.6.5.1175. Retrying hasn't helped, nor has using a different set of
machines (a different cluster with the same config). One note: the Hive
input data is in Azure Blob Storage.

The error happens in the step "Build N-Dimension Cuboid : level 1".

This is the error on the map side of the MapReduce job:

Error: java.lang.ArrayIndexOutOfBoundsException
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1453)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1349)
	at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
	at org.apache.hadoop.io.WritableUtils.writeVLong(WritableUtils.java:273)
	at org.apache.hadoop.io.WritableUtils.writeVInt(WritableUtils.java:253)
	at org.apache.hadoop.io.Text.write(Text.java:330)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1149)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
	at org.apache.kylin.engine.mr.steps.NDCuboidMapper.doMap(NDCuboidMapper.java:114)
	at org.apache.kylin.engine.mr.steps.NDCuboidMapper.doMap(NDCuboidMapper.java:47)
	at org.apache.kylin.engine.mr.KylinMapper.map(KylinMapper.java:77)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)

Container killed by the ApplicationMaster. Container killed on request.
Exit code is 143. Container exited with a non-zero exit code 143.


The error is a bit generic, and I can provide more logs if needed. Is
there any setting that needs to be tuned, or any likely causes we should
look into?
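
In case it's relevant: one thing we are checking on our side (an assumption on our part, not confirmed) is the map-side sort buffer, since the exception originates in MapOutputBuffer$Buffer.write during spill collection, and that buffer's size is controlled by mapreduce.task.io.sort.mb (Hadoop caps it below 2047 MB because it is addressed with an int). A sketch of the kind of override we could try in Kylin's job-level Hadoop config (kylin_job_conf.xml); the values below are illustrative, not a recommendation:

```xml
<!-- Sketch of a kylin_job_conf.xml override; values are examples only. -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <!-- Size of the map-side sort buffer. Must stay well below 2047 MB;
       the buffer is int-addressed, so oversized values can overflow it. -->
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <!-- Spill to disk earlier so the collect buffer fills less aggressively. -->
  <value>0.80</value>
</property>
```

Happy to test any variation of this, or other settings, if someone can confirm whether the sort buffer is a plausible culprit here.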

Thanks,
Matheus
