jilai liu created KYLIN-3494:
--------------------------------
Summary: Build cube with Spark fails with ArrayIndexOutOfBoundsException
Key: KYLIN-3494
URL: https://issues.apache.org/jira/browse/KYLIN-3494
Project: Kylin
Issue Type: Bug
Components: Job Engine
Affects Versions: v2.4.0
Reporter: jilai liu
Fix For: v1.6.0
Log Type: stderr
Log Upload Time: Mon Aug 13 15:50:10 +0800 2018
Log Length: 74544
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/data3/test/data/hadoop/hdfs/data/usercache/hadoop/filecache/17809/__spark_libs__6649521663189541594.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/data1/test/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/08/13 15:49:38 INFO util.SignalUtils: Registered signal handler for TERM
18/08/13 15:49:38 INFO util.SignalUtils: Registered signal handler for HUP
18/08/13 15:49:38 INFO util.SignalUtils: Registered signal handler for INT
18/08/13 15:49:38 INFO yarn.ApplicationMaster: Preparing Local resources
18/08/13 15:49:39 INFO yarn.ApplicationMaster: ApplicationAttemptId:
appattempt_1533616206085_5657_000001
18/08/13 15:49:39 INFO spark.SecurityManager: Changing view acls to: hadoop
18/08/13 15:49:39 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/08/13 15:49:39 INFO spark.SecurityManager: Changing view acls groups to:
18/08/13 15:49:39 INFO spark.SecurityManager: Changing modify acls groups to:
18/08/13 15:49:39 INFO spark.SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(hadoop); groups
with view permissions: Set(); users with modify permissions: Set(hadoop);
groups with modify permissions: Set()
18/08/13 15:49:39 INFO yarn.ApplicationMaster: Starting the user application in
a separate Thread
18/08/13 15:49:39 INFO yarn.ApplicationMaster: Waiting for spark context
initialization...
18/08/13 15:49:39 INFO spark.SparkContext: Running Spark version 2.1.2
18/08/13 15:49:39 INFO spark.SecurityManager: Changing view acls to: hadoop
18/08/13 15:49:39 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/08/13 15:49:39 INFO spark.SecurityManager: Changing view acls groups to:
18/08/13 15:49:39 INFO spark.SecurityManager: Changing modify acls groups to:
18/08/13 15:49:39 INFO spark.SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(hadoop); groups
with view permissions: Set(); users with modify permissions: Set(hadoop);
groups with modify permissions: Set()
18/08/13 15:49:40 INFO util.Utils: Successfully started service 'sparkDriver'
on port 40358.
18/08/13 15:49:40 INFO spark.SparkEnv: Registering MapOutputTracker
18/08/13 15:49:40 INFO spark.SparkEnv: Registering BlockManagerMaster
18/08/13 15:49:40 INFO storage.BlockManagerMasterEndpoint: Using
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/08/13 15:49:40 INFO storage.BlockManagerMasterEndpoint:
BlockManagerMasterEndpoint up
18/08/13 15:49:40 INFO storage.DiskBlockManager: Created local directory at
/data1/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/blockmgr-4cd0fed9-78ae-4e2c-826d-b42a8d6364d2
18/08/13 15:49:40 INFO storage.DiskBlockManager: Created local directory at
/data2/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/blockmgr-8cca22e9-ece0-469b-b7fa-3cd9567504d9
18/08/13 15:49:40 INFO storage.DiskBlockManager: Created local directory at
/data3/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/blockmgr-15f5073d-5ea2-4766-ab22-b8c68834fb80
18/08/13 15:49:40 INFO memory.MemoryStore: MemoryStore started with capacity
305.3 MB
18/08/13 15:49:40 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/08/13 15:49:40 INFO util.log: Logging initialized @2958ms
18/08/13 15:49:40 INFO ui.JettyUtils: Adding filter:
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/08/13 15:49:40 INFO server.Server: jetty-9.2.z-SNAPSHOT
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@2506206a{/jobs,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@7f1b8616{/jobs/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@5001120{/jobs/job,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@4a662152{/jobs/job/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@5ef75d04{/stages,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@7dff5bfa{/stages/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@78f3dc74{/stages/stage,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@3e40e89{/stages/stage/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@679f6c8c{/stages/pool,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@60b8cb0e{/stages/pool/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@64eab11{/storage,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@25fd6d17{/storage/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@399a8e28{/storage/rdd,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@1fdcd2ae{/storage/rdd/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@6c39f467{/environment,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@22e2b922{/environment/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@4fe49898{/executors,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@36d46a68{/executors/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@15ed2a19{/executors/threadDump,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@22f1aa2f{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@493ad6b1{/static,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@5ca862d4{/,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@206926ba{/api,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@f66404a{/jobs/job/kill,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@10f69049{/stages/stage/kill,null,AVAILABLE,@Spark}
18/08/13 15:49:40 INFO server.ServerConnector: Started
Spark@6e45bafc{HTTP/1.1}{0.0.0.0:36502}
18/08/13 15:49:40 INFO server.Server: Started @3103ms
18/08/13 15:49:40 INFO util.Utils: Successfully started service 'SparkUI' on
port 36502.
18/08/13 15:49:40 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at
http://172.16.19.79:36502
18/08/13 15:49:40 INFO cluster.YarnClusterScheduler: Created
YarnClusterScheduler
18/08/13 15:49:40 INFO cluster.SchedulerExtensionServices: Starting Yarn
extension services with app application_1533616206085_5657 and attemptId
Some(appattempt_1533616206085_5657_000001)
18/08/13 15:49:40 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 46273.
18/08/13 15:49:40 INFO netty.NettyBlockTransferService: Server created on
172.16.19.79:46273
18/08/13 15:49:40 INFO storage.BlockManager: Using
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication
policy
18/08/13 15:49:40 INFO storage.BlockManagerMaster: Registering BlockManager
BlockManagerId(driver, 172.16.19.79, 46273, None)
18/08/13 15:49:40 INFO storage.BlockManagerMasterEndpoint: Registering block
manager 172.16.19.79:46273 with 305.3 MB RAM, BlockManagerId(driver,
172.16.19.79, 46273, None)
18/08/13 15:49:40 INFO storage.BlockManagerMaster: Registered BlockManager
BlockManagerId(driver, 172.16.19.79, 46273, None)
18/08/13 15:49:40 INFO storage.BlockManager: Initialized BlockManager:
BlockManagerId(driver, 172.16.19.79, 46273, None)
18/08/13 15:49:40 INFO handler.ContextHandler: Started
o.s.j.s.ServletContextHandler@636d84f8{/metrics/json,null,AVAILABLE,@Spark}
18/08/13 15:49:41 INFO scheduler.EventLoggingListener: Logging events to
hdfs:///kylin/spark-history/application_1533616206085_5657_1
18/08/13 15:49:41 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster registered as
NettyRpcEndpointRef(spark://[email protected]:40358)
18/08/13 15:49:41 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
env:
CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_YARN_STAGING_DIR ->
hdfs://test-online/user/hadoop/.sparkStaging/application_1533616206085_5657
SPARK_USER -> hadoop
SPARK_YARN_MODE -> true
command:
{{JAVA_HOME}}/bin/java \
-server \
-Xmx8192m \
'-Dhdp.version=current' \
-Djava.io.tmpdir={{PWD}}/tmp \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://[email protected]:40358 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
2 \
--app-id \
application_1533616206085_5657 \
--user-class-path \
file:$PWD/__app__.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr
resources:
__app__.jar -> resource { scheme: "hdfs" host: "test-online" port: -1 file:
"/user/hadoop/.sparkStaging/application_1533616206085_5657/kylin-job-2.4.0.jar"
} size: 34310788 timestamp: 1534146575706 type: FILE visibility: PRIVATE
__spark_libs__ -> resource { scheme: "hdfs" host: "test-online" port: -1
file:
"/user/hadoop/.sparkStaging/application_1533616206085_5657/__spark_libs__6649521663189541594.zip"
} size: 200815831 timestamp: 1534146575510 type: ARCHIVE visibility: PRIVATE
__spark_conf__ -> resource { scheme: "hdfs" host: "test-online" port: -1
file:
"/user/hadoop/.sparkStaging/application_1533616206085_5657/__spark_conf__.zip"
} size: 142316 timestamp: 1534146575796 type: ARCHIVE visibility: PRIVATE
===============================================================================
18/08/13 15:49:41 INFO yarn.YarnRMClient: Registering the ApplicationMaster
18/08/13 15:49:41 INFO client.ConfiguredRMFailoverProxyProvider: Failing over
to rm2
18/08/13 15:49:41 INFO yarn.YarnAllocator: Will request 10 executor
container(s), each with 2 core(s) and 9011 MB memory (including 819 MB of
overhead)
18/08/13 15:49:41 INFO yarn.YarnAllocator: Submitted 10 unlocalized container
requests.
18/08/13 15:49:41 INFO yarn.ApplicationMaster: Started progress reporter thread
with (heartbeat : 3000, initial allocation : 200) intervals
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop073:46083
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop057:36259
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop062:33335
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop067:44632
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop075:39257
18/08/13 15:49:41 INFO impl.AMRMClientImpl: Received new token for :
hadoop051:45512
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000004 on host hadoop073
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000005 on host hadoop057
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000007 on host hadoop062
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000009 on host hadoop067
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000010 on host hadoop075
18/08/13 15:49:41 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000011 on host hadoop051
18/08/13 15:49:41 INFO yarn.YarnAllocator: Received 6 containers from YARN,
launching executors on 6 of them.
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop057:36259
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop067:44632
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop073:46083
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop075:39257
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop051:45512
18/08/13 15:49:41 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop062:33335
18/08/13 15:49:42 INFO impl.AMRMClientImpl: Received new token for :
hadoop070:42023
18/08/13 15:49:42 INFO impl.AMRMClientImpl: Received new token for :
hadoop063:34154
18/08/13 15:49:42 INFO impl.AMRMClientImpl: Received new token for :
hadoop053:33601
18/08/13 15:49:42 INFO impl.AMRMClientImpl: Received new token for :
hadoop079:40497
18/08/13 15:49:42 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000013 on host hadoop070
18/08/13 15:49:42 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000017 on host hadoop063
18/08/13 15:49:42 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000018 on host hadoop053
18/08/13 15:49:42 INFO yarn.YarnAllocator: Launching container
container_e79_1533616206085_5657_01_000020 on host hadoop079
18/08/13 15:49:42 INFO yarn.YarnAllocator: Received 4 containers from YARN,
launching executors on 4 of them.
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop079:40497
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop070:42023
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop063:34154
18/08/13 15:49:42 INFO impl.ContainerManagementProtocolProxy: Opening proxy :
hadoop053:33601
18/08/13 15:49:44 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.79:39942) with ID 10
18/08/13 15:49:44 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop079:44235 with 3.4 GB RAM, BlockManagerId(10, hadoop079, 44235,
None)
18/08/13 15:49:45 INFO impl.AMRMClientImpl: Received new token for :
hadoop060:39607
18/08/13 15:49:45 INFO impl.AMRMClientImpl: Received new token for :
hadoop084:38240
18/08/13 15:49:45 INFO impl.AMRMClientImpl: Received new token for :
hadoop061:40680
18/08/13 15:49:45 INFO impl.AMRMClientImpl: Received new token for :
hadoop074:38414
18/08/13 15:49:45 INFO yarn.YarnAllocator: Received 4 containers from YARN,
launching executors on 0 of them.
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.57:35082) with ID 2
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop057:40961 with 3.4 GB RAM, BlockManagerId(2, hadoop057, 40961,
None)
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.51:35812) with ID 6
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.75:45130) with ID 5
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.63:38512) with ID 8
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop075:33510 with 3.4 GB RAM, BlockManagerId(5, hadoop075, 33510,
None)
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop063:46329 with 3.4 GB RAM, BlockManagerId(8, hadoop063, 46329,
None)
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.73:33060) with ID 1
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.53:38988) with ID 9
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop051:43539 with 3.4 GB RAM, BlockManagerId(6, hadoop051, 43539,
None)
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop053:40239 with 3.4 GB RAM, BlockManagerId(9, hadoop053, 40239,
None)
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.67:33364) with ID 4
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop073:39426 with 3.4 GB RAM, BlockManagerId(1, hadoop073, 39426,
None)
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop067:37637 with 3.4 GB RAM, BlockManagerId(4, hadoop067, 37637,
None)
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.62:37576) with ID 3
18/08/13 15:49:45 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is
ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/08/13 15:49:45 INFO cluster.YarnClusterScheduler:
YarnClusterScheduler.postStartHook done
18/08/13 15:49:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint:
Registered executor NettyRpcEndpointRef(null) (172.16.19.70:59750) with ID 7
18/08/13 15:49:45 INFO common.AbstractHadoopJob: Ready to load KylinConfig from
uri:
kylin_metadata@hdfs,path=hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/metadata
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop062:34791 with 3.4 GB RAM, BlockManagerId(3, hadoop062, 34791,
None)
18/08/13 15:49:45 INFO storage.BlockManagerMasterEndpoint: Registering block
manager hadoop070:34116 with 3.4 GB RAM, BlockManagerId(7, hadoop070, 34116,
None)
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.cube.CubeManager
18/08/13 15:49:46 INFO cube.CubeManager: Initializing CubeManager with config
null
18/08/13 15:49:46 INFO persistence.ResourceStore: Using metadata url
kylin_metadata@hdfs,path=hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/metadata
for resource store
18/08/13 15:49:46 INFO persistence.HDFSResourceStore: hdfs meta path :
hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/metadata
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.cube.CubeDescManager
18/08/13 15:49:46 INFO cube.CubeDescManager: Initializing CubeDescManager with
config
kylin_metadata@hdfs,path=hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/metadata
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.metadata.project.ProjectManager
18/08/13 15:49:46 INFO project.ProjectManager: Initializing ProjectManager with
metadata url
kylin_metadata@hdfs,path=hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/metadata
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.metadata.cachesync.Broadcaster
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.metadata.model.DataModelManager
18/08/13 15:49:46 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.metadata.TableMetadataManager
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: Checking custom measure
types from kylin config
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering
COUNT_DISTINCT(hllc), class
org.apache.kylin.measure.hllc.HLLCMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering
COUNT_DISTINCT(bitmap), class
org.apache.kylin.measure.bitmap.BitmapMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering TOP_N(topn),
class org.apache.kylin.measure.topn.TopNMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering RAW(raw), class
org.apache.kylin.measure.raw.RawMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering
EXTENDED_COLUMN(extendedcolumn), class
org.apache.kylin.measure.extendedcolumn.ExtendedColumnMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering
PERCENTILE_APPROX(percentile), class
org.apache.kylin.measure.percentile.PercentileMeasureType$Factory
18/08/13 15:49:46 INFO measure.MeasureTypeFactory: registering
COUNT_DISTINCT(dim_dc), class
org.apache.kylin.measure.dim.DimCountDistinctMeasureType$Factory
18/08/13 15:49:46 INFO model.DataModelManager: Model commonlog_iphone_model is
missing or unloaded yet
18/08/13 15:49:46 INFO model.DataModelManager: Model commonlog_andriod_model is
missing or unloaded yet
18/08/13 15:49:46 INFO spark.SparkCubingByLayer: RDD input path:
hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/kylin_intermediate_commonlog_andriod_cube7_d1db5d22_cf49_48fc_b9c0_54e8e70cf362
18/08/13 15:49:46 INFO spark.SparkCubingByLayer: RDD Output path:
hdfs://test-online/kylin/kylin_metadata/kylin-ed7d5d0d-1007-404d-8512-73e946cfaa73/commonlog_andriod_cube7/cuboid/
18/08/13 15:49:46 INFO zlib.ZlibFactory: Successfully loaded & initialized
native-zlib library
18/08/13 15:49:46 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
18/08/13 15:49:46 INFO spark.SparkCubingByLayer: All measure are normal (agg on
all cuboids) ? : true
18/08/13 15:49:47 INFO memory.MemoryStore: Block broadcast_0 stored as values
in memory (estimated size 297.1 KB, free 305.0 MB)
18/08/13 15:49:47 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as
bytes in memory (estimated size 25.5 KB, free 304.9 MB)
18/08/13 15:49:47 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on 172.16.19.79:46273 (size: 25.5 KB, free: 305.2 MB)
18/08/13 15:49:47 INFO spark.SparkContext: Created broadcast 0 from
sequenceFile at SparkCubingByLayer.java:182
18/08/13 15:49:47 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.cube.cuboid.CuboidManager
18/08/13 15:49:47 INFO common.KylinConfig: Creating new manager instance of
class org.apache.kylin.dict.DictionaryManager
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/APP_VERSION/f0950bcb-7ec1-40b2-b7aa-394989976397.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/PID/b91eeb9d-a818-4648-816d-8ec209a97641.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/LANGUAGE/f63e6df9-73b8-47c1-afa4-2d5bd43926c3.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/OS_VERSION/2c3fa118-448e-45b4-b8c7-c8e2c5d3090c.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/VALUE/c1e34ccc-68a2-4acb-a643-8d17c3067e74.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/DU/e125650e-2cb8-4b51-ba99-739476b08d33.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/KEY/b779f6d9-23a6-45e2-8dbf-83f898670ec2.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/NET/dba93801-f8cc-4958-97ba-e2cdeec7abd4.dict
18/08/13 15:49:47 INFO dict.DictionaryManager: DictionaryManager(258090034)
loading DictionaryInfo(loadDictObj:true) at
/dict/ODS.COMMONLOG_ANDROID/CITYID/ebadaeae-c3dd-45d6-9cb8-4c0ecef82cee.dict
18/08/13 15:49:47 INFO common.CubeStatsReader: Estimating size for layer 0, all
cuboids are 511, total size is 449.9567413330078
18/08/13 15:49:47 INFO spark.SparkCubingByLayer: Partition for spark cubing: 44
18/08/13 15:49:47 INFO output.FileOutputCommitter: File Output Committer
Algorithm version is 1
18/08/13 15:49:47 INFO spark.SparkContext: Starting job:
saveAsNewAPIHadoopDataset at SparkCubingByLayer.java:277
18/08/13 15:49:47 INFO mapred.FileInputFormat: Total input paths to process : 44
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack6/172.16.19.86:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/default/rack/172.16.19.99:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/default/rack/172.16.19.106:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack4/172.16.19.73:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack4/172.16.19.74:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack5/172.16.19.84:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/default/rack/172.16.19.103:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack4/172.16.19.71:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack4/172.16.19.72:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack2/172.16.19.59:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack1/172.16.19.51:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack1/172.16.19.50:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack5/172.16.19.81:50010
18/08/13 15:49:48 INFO net.NetworkTopology: Adding a new node:
/dc1/rack5/172.16.19.83:50010
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Registering RDD 3 (mapToPair at
SparkCubingByLayer.java:182)
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Got job 0
(saveAsNewAPIHadoopDataset at SparkCubingByLayer.java:277) with 44 output
partitions
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Final stage: ResultStage 1
(saveAsNewAPIHadoopDataset at SparkCubingByLayer.java:277)
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Parents of final stage:
List(ShuffleMapStage 0)
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Missing parents:
List(ShuffleMapStage 0)
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0
(MapPartitionsRDD[3] at mapToPair at SparkCubingByLayer.java:182), which has no
missing parents
18/08/13 15:49:48 INFO memory.MemoryStore: Block broadcast_1 stored as values
in memory (estimated size 79.7 KB, free 304.9 MB)
18/08/13 15:49:48 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as
bytes in memory (estimated size 30.0 KB, free 304.8 MB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on 172.16.19.79:46273 (size: 30.0 KB, free: 305.2 MB)
18/08/13 15:49:48 INFO spark.SparkContext: Created broadcast 1 from broadcast
at DAGScheduler.scala:996
18/08/13 15:49:48 INFO scheduler.DAGScheduler: Submitting 105 missing tasks
from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at
SparkCubingByLayer.java:182)
18/08/13 15:49:48 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with
105 tasks
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 14.0 in stage
0.0 (TID 0, hadoop067, executor 4, partition 14, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0
(TID 1, hadoop075, executor 5, partition 4, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 10.0 in stage
0.0 (TID 2, hadoop057, executor 2, partition 10, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 13.0 in stage
0.0 (TID 3, hadoop053, executor 9, partition 13, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 11.0 in stage
0.0 (TID 4, hadoop070, executor 7, partition 11, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 16.0 in stage
0.0 (TID 5, hadoop062, executor 3, partition 16, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 18.0 in stage
0.0 (TID 6, hadoop079, executor 10, partition 18, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 31.0 in stage
0.0 (TID 7, hadoop063, executor 8, partition 31, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 25.0 in stage
0.0 (TID 8, hadoop073, executor 1, partition 25, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0
(TID 9, hadoop075, executor 5, partition 5, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 45.0 in stage
0.0 (TID 10, hadoop057, executor 2, partition 45, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 22.0 in stage
0.0 (TID 11, hadoop053, executor 9, partition 22, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 19.0 in stage
0.0 (TID 12, hadoop070, executor 7, partition 19, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 42.0 in stage
0.0 (TID 13, hadoop062, executor 3, partition 42, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 21.0 in stage
0.0 (TID 14, hadoop079, executor 10, partition 21, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 33.0 in stage
0.0 (TID 15, hadoop063, executor 8, partition 33, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO scheduler.TaskSetManager: Starting task 26.0 in stage
0.0 (TID 16, hadoop073, executor 1, partition 26, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop079:44235 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop053:40239 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop073:39426 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop067:37637 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop070:34116 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop063:46329 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop075:33510 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop062:34791 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:48 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop057:40961 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop062:34791 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop075:33510 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop070:34116 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop067:37637 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop073:39426 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop063:46329 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop057:40961 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop053:40239 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:49 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in
memory on hadoop079:44235 (size: 25.5 KB, free: 3.4 GB)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 15.0 in stage
0.0 (TID 17, hadoop067, executor 4, partition 15, RACK_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0
(TID 18, hadoop051, executor 6, partition 1, RACK_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 12.0 in stage
0.0 (TID 19, hadoop051, executor 6, partition 12, RACK_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0
(TID 20, hadoop075, executor 5, partition 6, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0
(TID 21, hadoop075, executor 5, partition 7, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 WARN scheduler.TaskSetManager: Lost task 4.0 in stage 0.0
(TID 1, hadoop075, executor 5): java.lang.ArrayIndexOutOfBoundsException: 8
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.getCell(BaseCuboidBuilder.java:167)
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildKey(BaseCuboidBuilder.java:116)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:343)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:307)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 5.0 in stage 0.0
(TID 9) on hadoop075, executor 5: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 1]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 20.0 in stage
0.0 (TID 22, hadoop070, executor 7, partition 20, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 19.0 in stage 0.0
(TID 12) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 2]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 19.1 in stage
0.0 (TID 23, hadoop070, executor 7, partition 19, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 11.0 in stage 0.0
(TID 4) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 3]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 14.0 in stage 0.0
(TID 0) on hadoop067, executor 4: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 4]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 14.1 in stage
0.0 (TID 24, hadoop067, executor 4, partition 14, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 50.0 in stage
0.0 (TID 25, hadoop079, executor 10, partition 50, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 21.0 in stage 0.0
(TID 14) on hadoop079, executor 10: java.lang.ArrayIndexOutOfBoundsException
(8) [duplicate 5]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 52.0 in stage
0.0 (TID 26, hadoop057, executor 2, partition 52, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 45.0 in stage 0.0
(TID 10) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 6]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 45.1 in stage
0.0 (TID 27, hadoop057, executor 2, partition 45, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 10.0 in stage 0.0
(TID 2) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 7]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 15.0 in stage 0.0
(TID 17) on hadoop067, executor 4: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 8]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 10.1 in stage
0.0 (TID 28, hadoop062, executor 3, partition 10, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 42.0 in stage 0.0
(TID 13) on hadoop062, executor 3: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 9]
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Starting task 42.1 in stage
0.0 (TID 29, hadoop062, executor 3, partition 42, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:51 INFO scheduler.TaskSetManager: Lost task 16.0 in stage 0.0
(TID 5) on hadoop062, executor 3: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 10]
18/08/13 15:49:52 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in
memory on hadoop051:43539 (size: 30.0 KB, free: 3.4 GB)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 14.1 in stage 0.0
(TID 24) on hadoop067, executor 4: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 11]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 14.2 in stage
0.0 (TID 30, hadoop067, executor 4, partition 14, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 11.1 in stage
0.0 (TID 31, hadoop075, executor 5, partition 11, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 7.0 in stage 0.0
(TID 21) on hadoop075, executor 5: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 12]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 7.1 in stage 0.0
(TID 32, hadoop075, executor 5, partition 7, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 6.0 in stage 0.0
(TID 20) on hadoop075, executor 5: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 13]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 27.0 in stage
0.0 (TID 33, hadoop073, executor 1, partition 27, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 26.0 in stage 0.0
(TID 16) on hadoop073, executor 1: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 14]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 26.1 in stage
0.0 (TID 34, hadoop073, executor 1, partition 26, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 90.0 in stage
0.0 (TID 35, hadoop063, executor 8, partition 90, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 25.0 in stage 0.0
(TID 8) on hadoop073, executor 1: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 15]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 33.0 in stage 0.0
(TID 15) on hadoop063, executor 8: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 16]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 21.1 in stage
0.0 (TID 36, hadoop070, executor 7, partition 21, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 20.0 in stage 0.0
(TID 22) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 17]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 75.0 in stage
0.0 (TID 37, hadoop057, executor 2, partition 75, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 52.0 in stage 0.0
(TID 26) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 18]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 20.1 in stage
0.0 (TID 38, hadoop070, executor 7, partition 20, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 19.1 in stage 0.0
(TID 23) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 19]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 52.1 in stage
0.0 (TID 39, hadoop057, executor 2, partition 52, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 45.1 in stage 0.0
(TID 27) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 20]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 50.0 in stage 0.0
(TID 25) on hadoop079, executor 10: java.lang.ArrayIndexOutOfBoundsException
(8) [duplicate 21]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 50.1 in stage
0.0 (TID 40, hadoop079, executor 10, partition 50, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 33.1 in stage
0.0 (TID 41, hadoop063, executor 8, partition 33, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 31.0 in stage 0.0
(TID 7) on hadoop063, executor 8: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 22]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 18.0 in stage 0.0
(TID 6) on hadoop079, executor 10: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 23]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 18.1 in stage
0.0 (TID 42, hadoop079, executor 10, partition 18, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 19.2 in stage
0.0 (TID 43, hadoop070, executor 7, partition 19, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 21.1 in stage 0.0
(TID 36) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 24]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 14.2 in stage 0.0
(TID 30) on hadoop067, executor 4: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 25]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 14.3 in stage
0.0 (TID 44, hadoop067, executor 4, partition 14, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 31.1 in stage
0.0 (TID 45, hadoop063, executor 8, partition 31, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 90.0 in stage 0.0
(TID 35) on hadoop063, executor 8: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 26]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 16.1 in stage
0.0 (TID 46, hadoop062, executor 3, partition 16, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 42.1 in stage 0.0
(TID 29) on hadoop062, executor 3: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 27]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 45.2 in stage
0.0 (TID 47, hadoop057, executor 2, partition 45, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 52.1 in stage 0.0
(TID 39) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 28]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 42.2 in stage
0.0 (TID 48, hadoop062, executor 3, partition 42, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 10.1 in stage 0.0
(TID 28) on hadoop062, executor 3: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 29]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 10.2 in stage
0.0 (TID 49, hadoop075, executor 5, partition 10, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 7.1 in stage 0.0
(TID 32) on hadoop075, executor 5: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 30]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 21.2 in stage
0.0 (TID 50, hadoop070, executor 7, partition 21, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 20.1 in stage 0.0
(TID 38) on hadoop070, executor 7: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 31]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 7.2 in stage 0.0
(TID 51, hadoop075, executor 5, partition 7, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 11.1 in stage 0.0
(TID 31) on hadoop075, executor 5: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 32]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 52.2 in stage
0.0 (TID 52, hadoop057, executor 2, partition 52, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 75.0 in stage 0.0
(TID 37) on hadoop057, executor 2: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 33]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 35.0 in stage
0.0 (TID 53, hadoop053, executor 9, partition 35, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 22.0 in stage 0.0
(TID 11) on hadoop053, executor 9: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 34]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 22.1 in stage
0.0 (TID 54, hadoop053, executor 9, partition 22, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 13.0 in stage 0.0
(TID 3) on hadoop053, executor 9: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 35]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 90.1 in stage
0.0 (TID 55, hadoop073, executor 1, partition 90, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 26.1 in stage 0.0
(TID 34) on hadoop073, executor 1: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 36]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Starting task 26.2 in stage
0.0 (TID 56, hadoop073, executor 1, partition 26, NODE_LOCAL, 6146 bytes)
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 27.0 in stage 0.0
(TID 33) on hadoop073, executor 1: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 37]
18/08/13 15:49:52 INFO scheduler.TaskSetManager: Lost task 14.3 in stage 0.0
(TID 44) on hadoop067, executor 4: java.lang.ArrayIndexOutOfBoundsException (8)
[duplicate 38]
18/08/13 15:49:52 ERROR scheduler.TaskSetManager: Task 14 in stage 0.0 failed 4
times; aborting job
18/08/13 15:49:52 INFO cluster.YarnClusterScheduler: Cancelling stage 0
18/08/13 15:49:52 INFO cluster.YarnClusterScheduler: Stage 0 was cancelled
18/08/13 15:49:52 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (mapToPair at
SparkCubingByLayer.java:182) failed in 4.198 s due to Job aborted due to stage
failure: Task 14 in stage 0.0 failed 4 times, most recent failure: Lost task
14.3 in stage 0.0 (TID 44, hadoop067, executor 4):
java.lang.ArrayIndexOutOfBoundsException: 8
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.getCell(BaseCuboidBuilder.java:167)
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildKey(BaseCuboidBuilder.java:116)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:343)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:307)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
18/08/13 15:49:52 INFO scheduler.DAGScheduler: Job 0 failed:
saveAsNewAPIHadoopDataset at SparkCubingByLayer.java:277, took 4.768935 s
18/08/13 15:49:52 ERROR yarn.ApplicationMaster: User class threw exception:
java.lang.RuntimeException: error execute
org.apache.kylin.engine.spark.SparkCubingByLayer
java.lang.RuntimeException: error execute
org.apache.kylin.engine.spark.SparkCubingByLayer
    at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
    at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:636)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 14 in stage 0.0 failed 4 times, most recent failure: Lost task 14.3 in
stage 0.0 (TID 44, hadoop067, executor 4):
java.lang.ArrayIndexOutOfBoundsException: 8
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.getCell(BaseCuboidBuilder.java:167)
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildKey(BaseCuboidBuilder.java:116)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:343)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:307)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1158)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
    at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:831)
    at org.apache.kylin.engine.spark.SparkCubingByLayer.saveToHDFS(SparkCubingByLayer.java:277)
    at org.apache.kylin.engine.spark.SparkCubingByLayer.execute(SparkCubingByLayer.java:230)
    at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
... 6 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 8
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.getCell(BaseCuboidBuilder.java:167)
    at org.apache.kylin.engine.mr.common.BaseCuboidBuilder.buildKey(BaseCuboidBuilder.java:116)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:343)
    at org.apache.kylin.engine.spark.SparkCubingByLayer$EncodeBaseCuboid.call(SparkCubingByLayer.java:307)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1043)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
18/08/13 15:49:52 INFO yarn.ApplicationMaster: Final app status: FAILED,
exitCode: 15, (reason: User class threw exception: java.lang.RuntimeException:
error execute org.apache.kylin.engine.spark.SparkCubingByLayer)
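For context on the failure itself: `java.lang.ArrayIndexOutOfBoundsException: 8` inside `BaseCuboidBuilder.getCell` typically means a flat-table row handed to the base-cuboid encoder carries fewer fields than the cube's model expects, for example when the source Hive table's schema has drifted from the cube design. The sketch below is a hypothetical illustration of that failure mode only; the class and method names are invented and this is not Kylin's actual code.

```java
// Hypothetical sketch (NOT Kylin's code): an encoder that reads dimension
// values by fixed column index, as BaseCuboidBuilder.getCell effectively does.
// If a row carries only 8 fields (indices 0..7) but the cube layout expects a
// value at index 8, the array access overruns, producing the same
// java.lang.ArrayIndexOutOfBoundsException: 8 seen in the log above.
public class ColumnCountMismatch {

    // Reads one dimension cell from a split flat-table row.
    static String getCell(String[] flatRow, int columnIndex) {
        return flatRow[columnIndex]; // throws if columnIndex >= flatRow.length
    }

    public static void main(String[] args) {
        String[] row = "a,b,c,d,e,f,g,h".split(","); // 8 fields: indices 0..7
        System.out.println(getCell(row, 7));          // last valid index is fine
        try {
            getCell(row, 8);                          // cube expects a 9th field
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e);
        }
    }
}
```

If this matches the situation, one plausible first step is re-syncing the source table metadata in Kylin and rebuilding the cube, so the generated flat table and the cube descriptor agree on the column count.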
18/08/13 15:49:52 INFO spark.SparkContext: Invoking stop() from shutdown hook
18/08/13 15:49:52 INFO server.ServerConnector: Stopped Spark@6e45bafc{HTTP/1.1}{0.0.0.0:0}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@10f69049{/stages/stage/kill,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@f66404a{/jobs/job/kill,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@206926ba{/api,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ca862d4{/,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@493ad6b1{/static,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@22f1aa2f{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@15ed2a19{/executors/threadDump,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@36d46a68{/executors/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4fe49898{/executors,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@22e2b922{/environment/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6c39f467{/environment,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1fdcd2ae{/storage/rdd/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@399a8e28{/storage/rdd,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@25fd6d17{/storage/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@64eab11{/storage,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@60b8cb0e{/stages/pool/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@679f6c8c{/stages/pool,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3e40e89{/stages/stage/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@78f3dc74{/stages/stage,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7dff5bfa{/stages/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5ef75d04{/stages,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4a662152{/jobs/job/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5001120{/jobs/job,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7f1b8616{/jobs/json,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2506206a{/jobs,null,UNAVAILABLE,@Spark}
18/08/13 15:49:52 INFO ui.SparkUI: Stopped Spark web UI at
http://172.16.19.79:36502
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 16.1 in stage 0.0
(TID 46, hadoop062, executor 3): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already
stopped! Dropping event
SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@5725003b,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 45.2 in stage 0.0
(TID 47, hadoop057, executor 2): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already
stopped! Dropping event
SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@18373e72,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 22.1 in stage 0.0 (TID 54, hadoop053, executor 9): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@3b7b08b6,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 90.1 in stage 0.0 (TID 55, hadoop073, executor 1): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@1707c5a7,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 35.0 in stage 0.0 (TID 53, hadoop053, executor 9): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@68dc3ba4,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 18.1 in stage 0.0 (TID 42, hadoop079, executor 10): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@49394c63,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 7.2 in stage 0.0 (TID 51, hadoop075, executor 5): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@2a64d34a,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 26.2 in stage 0.0 (TID 56, hadoop073, executor 1): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@367115c,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 21.2 in stage 0.0 (TID 50, hadoop070, executor 7): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@a6f1b8f,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 31.1 in stage 0.0 (TID 45, hadoop063, executor 8): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@93604f5,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 50.1 in stage 0.0 (TID 40, hadoop079, executor 10): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@544681a,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 42.2 in stage 0.0 (TID 48, hadoop062, executor 3): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@3e12741f,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 19.2 in stage 0.0 (TID 43, hadoop070, executor 7): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@4c53b331,null)
18/08/13 15:49:52 WARN scheduler.TaskSetManager: Lost task 52.2 in stage 0.0 (TID 52, hadoop057, executor 2): TaskKilled (killed intentionally)
18/08/13 15:49:52 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ShuffleMapTask,TaskKilled,org.apache.spark.scheduler.TaskInfo@757b07e2,null)
18/08/13 15:49:52 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
18/08/13 15:49:52 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
18/08/13 15:49:52 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/08/13 15:49:52 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices (serviceOption=None, services=List(), started=false)
18/08/13 15:49:52 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/08/13 15:49:52 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:154)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:134)
    at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:570)
    at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:180)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:109)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:748)
18/08/13 15:49:52 INFO memory.MemoryStore: MemoryStore cleared
18/08/13 15:49:52 INFO storage.BlockManager: BlockManager stopped
18/08/13 15:49:52 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/08/13 15:49:52 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/08/13 15:49:52 INFO spark.SparkContext: Successfully stopped SparkContext
18/08/13 15:49:52 INFO util.ShutdownHookManager: Shutdown hook called
18/08/13 15:49:52 INFO util.ShutdownHookManager: Deleting directory /data3/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/spark-69de25a9-16d2-4eaa-be70-d23ec191776a
18/08/13 15:49:52 INFO util.ShutdownHookManager: Deleting directory /data2/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/spark-a4b4251c-feaf-426a-a363-4159cdc092f6
18/08/13 15:49:52 INFO util.ShutdownHookManager: Deleting directory /data1/test/data/hadoop/hdfs/data/usercache/hadoop/appcache/application_1533616206085_5657/spark-751dcea0-e9c4-4160-9112-a6fe081ec4fb
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)