SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/tmp/nm-local-dir/filecache/13/spark-libs.jar/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/software/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/01/30 19:05:28 INFO util.SignalUtils: Registered signal handler for TERM
18/01/30 19:05:28 INFO util.SignalUtils: Registered signal handler for HUP
18/01/30 19:05:28 INFO util.SignalUtils: Registered signal handler for INT
18/01/30 19:05:29 INFO yarn.ApplicationMaster: Preparing Local resources
18/01/30 19:05:30 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1516010594436_0185_000002
18/01/30 19:05:30 INFO spark.SecurityManager: Changing view acls to: hadoop
18/01/30 19:05:30 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/01/30 19:05:30 INFO spark.SecurityManager: Changing view acls groups to: 
18/01/30 19:05:30 INFO spark.SecurityManager: Changing modify acls groups to: 
18/01/30 19:05:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/01/30 19:05:30 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
18/01/30 19:05:30 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
18/01/30 19:05:30 INFO spark.SparkContext: Running Spark version 2.1.2
18/01/30 19:05:30 INFO spark.SecurityManager: Changing view acls to: hadoop
18/01/30 19:05:30 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/01/30 19:05:30 INFO spark.SecurityManager: Changing view acls groups to: 
18/01/30 19:05:30 INFO spark.SecurityManager: Changing modify acls groups to: 
18/01/30 19:05:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/01/30 19:05:31 INFO util.Utils: Successfully started service 'sparkDriver' on port 42339.
18/01/30 19:05:31 INFO spark.SparkEnv: Registering MapOutputTracker
18/01/30 19:05:31 INFO spark.SparkEnv: Registering BlockManagerMaster
18/01/30 19:05:31 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/01/30 19:05:31 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/01/30 19:05:31 INFO storage.DiskBlockManager: Created local directory at /home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/blockmgr-e9044b90-ff87-4ad3-9ae7-cf85a131ba4e
18/01/30 19:05:31 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/01/30 19:05:31 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/01/30 19:05:31 INFO util.log: Logging initialized @3601ms
18/01/30 19:05:31 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/01/30 19:05:31 INFO server.Server: jetty-9.2.z-SNAPSHOT
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1727271{/jobs,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5a8c27d3{/jobs/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@201b73c{/jobs/job,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6dc9732e{/jobs/job/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f3a751c{/stages,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44336797{/stages/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b84807a{/stages/stage,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d72da27{/stages/stage/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@41ccc000{/stages/pool,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e1c0871{/stages/pool/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@71f4dc98{/storage,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@a2c9363{/storage/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d479ee6{/storage/rdd,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53e654bf{/storage/rdd/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4215247{/environment,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6af87e2b{/environment/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7bd97d3c{/executors,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1afb9d54{/executors/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14c64726{/executors/threadDump,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3de67ab3{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@d66c0ae{/static,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4bb1ed6b{/,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6de87c4c{/api,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6355a0a0{/jobs/job/kill,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@580d6fd4{/stages/stage/kill,null,AVAILABLE,@Spark}
18/01/30 19:05:31 INFO server.ServerConnector: Started Spark@36b4f4c5{HTTP/1.1}{0.0.0.0:37939}
18/01/30 19:05:31 INFO server.Server: Started @3740ms
18/01/30 19:05:31 INFO util.Utils: Successfully started service 'SparkUI' on port 37939.
18/01/30 19:05:31 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.184:37939
18/01/30 19:05:31 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
18/01/30 19:05:31 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1516010594436_0185 and attemptId Some(appattempt_1516010594436_0185_000002)
18/01/30 19:05:31 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36754.
18/01/30 19:05:31 INFO netty.NettyBlockTransferService: Server created on 192.168.1.184:36754
18/01/30 19:05:31 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/01/30 19:05:31 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.184, 36754, None)
18/01/30 19:05:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.1.184:36754 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.184, 36754, None)
18/01/30 19:05:31 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.184, 36754, None)
18/01/30 19:05:31 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.184, 36754, None)
18/01/30 19:05:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@57ef634b{/metrics/json,null,AVAILABLE,@Spark}
18/01/30 19:05:32 INFO scheduler.EventLoggingListener: Logging events to hdfs:///kylin/spark-history/application_1516010594436_0185_2
18/01/30 19:05:32 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://[email protected]:42339)
18/01/30 19:05:32 INFO yarn.ApplicationMaster: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_YARN_STAGING_DIR -> hdfs://192.168.1.171:9000/user/hadoop/.sparkStaging/application_1516010594436_0185
    SPARK_USER -> hadoop
    SPARK_YARN_MODE -> true

  command:
    {{JAVA_HOME}}/bin/java \ 
      -server \ 
      -Xmx1024m \ 
      '-Dhdp.version=current' \ 
      -Djava.io.tmpdir={{PWD}}/tmp \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://[email protected]:42339 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      2 \ 
      --app-id \ 
      application_1516010594436_0185 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      --user-class-path \ 
      file:$PWD/htrace-core-3.1.0-incubating.jar \ 
      --user-class-path \ 
      file:$PWD/metrics-core-2.2.0.jar \ 
      --user-class-path \ 
      file:$PWD/guava-12.0.1.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr

  resources:
    htrace-core-3.1.0-incubating.jar -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/user/hadoop/.sparkStaging/application_1516010594436_0185/htrace-core-3.1.0-incubating.jar" } size: 1475955 timestamp: 1517310297655 type: FILE visibility: PRIVATE
    __app__.jar -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/user/hadoop/.sparkStaging/application_1516010594436_0185/kylin-job-2.2.0.jar" } size: 16695259 timestamp: 1517310297574 type: FILE visibility: PRIVATE
    guava-12.0.1.jar -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/user/hadoop/.sparkStaging/application_1516010594436_0185/guava-12.0.1.jar" } size: 1795932 timestamp: 1517310297780 type: FILE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/user/hadoop/.sparkStaging/application_1516010594436_0185/__spark_conf__.zip" } size: 86203 timestamp: 1517310297898 type: ARCHIVE visibility: PRIVATE
    metrics-core-2.2.0.jar -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/user/hadoop/.sparkStaging/application_1516010594436_0185/metrics-core-2.2.0.jar" } size: 82123 timestamp: 1517310297714 type: FILE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "192.168.1.171" port: 9000 file: "/kylin/spark/spark-libs.jar" } size: 200781646 timestamp: 1517295215668 type: ARCHIVE visibility: PUBLIC

===============================================================================
18/01/30 19:05:32 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.171:8030
18/01/30 19:05:32 INFO yarn.YarnRMClient: Registering the ApplicationMaster
18/01/30 19:05:32 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 2 core(s) and 1408 MB memory (including 384 MB of overhead)
18/01/30 19:05:32 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
18/01/30 19:05:32 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
18/01/30 19:05:32 INFO impl.AMRMClientImpl: Received new token for : sx-hadoop-dn02:32911
18/01/30 19:05:32 INFO yarn.YarnAllocator: Launching container container_1516010594436_0185_02_000002 on host sx-hadoop-dn02
18/01/30 19:05:32 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/01/30 19:05:32 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/01/30 19:05:32 INFO impl.ContainerManagementProtocolProxy: Opening proxy : sx-hadoop-dn02:32911
18/01/30 19:05:36 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.1.181:54402) with ID 1
18/01/30 19:05:36 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/01/30 19:05:36 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
18/01/30 19:05:36 INFO storage.BlockManagerMasterEndpoint: Registering block manager sx-hadoop-dn02:42882 with 366.3 MB RAM, BlockManagerId(1, sx-hadoop-dn02, 42882, None)
18/01/30 19:05:36 INFO common.AbstractHadoopJob: Ready to load KylinConfig from uri: kylin_metadata@hdfs,path=hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a
18/01/30 19:05:36 INFO cube.CubeManager: Initializing CubeManager with config kylin_metadata@hdfs,path=hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a
18/01/30 19:05:36 INFO persistence.ResourceStore: Using metadata url kylin_metadata@hdfs,path=hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a for resource store
18/01/30 19:05:36 INFO hdfs.HDFSResourceStore: hdfs meta path : hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a
18/01/30 19:05:36 INFO cube.CubeManager: Loading Cube from folder hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a/cube
18/01/30 19:05:37 INFO cube.CubeDescManager: Initializing CubeDescManager with config kylin_metadata@hdfs,path=hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a
18/01/30 19:05:37 INFO cube.CubeDescManager: Reloading Cube Metadata from folder hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a/cube_desc
18/01/30 19:05:37 INFO project.ProjectManager: Initializing ProjectManager with metadata url kylin_metadata@hdfs,path=hdfs://192.168.1.171:9000/kylin/kylin_metadata/metadata/c3468331-ab55-4368-b691-bf08e2b5fd2a
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: Checking custom measure types from kylin config
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering COUNT_DISTINCT(hllc), class org.apache.kylin.measure.hllc.HLLCMeasureType$Factory
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering COUNT_DISTINCT(bitmap), class org.apache.kylin.measure.bitmap.BitmapMeasureType$Factory
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering TOP_N(topn), class org.apache.kylin.measure.topn.TopNMeasureType$Factory
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering RAW(raw), class org.apache.kylin.measure.raw.RawMeasureType$Factory
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering EXTENDED_COLUMN(extendedcolumn), class org.apache.kylin.measure.extendedcolumn.ExtendedColumnMeasureType$Factory
18/01/30 19:05:37 INFO measure.MeasureTypeFactory: registering PERCENTILE(percentile), class org.apache.kylin.measure.percentile.PercentileMeasureType$Factory
18/01/30 19:05:37 INFO metadata.MetadataManager: Reloading data model at /model_desc/kylin_sales_model.json
18/01/30 19:05:37 INFO cube.CubeDescManager: Loaded 1 Cube(s)
18/01/30 19:05:37 INFO cube.CubeManager: Reloaded cube kylin_sales_cube_clone being CUBE[name=kylin_sales_cube_clone] having 2 segments
18/01/30 19:05:37 INFO cube.CubeManager: Loaded 1 cubes, fail on 0 cubes
18/01/30 19:05:37 INFO spark.SparkCubingByLayer: RDD Output path: hdfs://192.168.1.171:9000/kylin/kylin_metadata/kylin-5b2617ee-90ec-4c4b-86ff-699d0f14223e/kylin_sales_cube_clone/cuboid/
18/01/30 19:05:37 INFO spark.SparkCubingByLayer: All measure are normal (agg on all cuboids) ? : true
18/01/30 19:05:38 INFO internal.SharedState: Warehouse path is 'file:/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/container_1516010594436_0185_02_000001/spark-warehouse'.
18/01/30 19:05:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@75fae875{/SQL,null,AVAILABLE,@Spark}
18/01/30 19:05:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3785cc60{/SQL/json,null,AVAILABLE,@Spark}
18/01/30 19:05:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6299f0a5{/SQL/execution,null,AVAILABLE,@Spark}
18/01/30 19:05:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@578d227{/SQL/execution/json,null,AVAILABLE,@Spark}
18/01/30 19:05:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29f2a18{/static/sql,null,AVAILABLE,@Spark}
18/01/30 19:05:38 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/01/30 19:05:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/01/30 19:05:39 INFO metastore.ObjectStore: ObjectStore, initialize called
18/01/30 19:05:39 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/01/30 19:05:39 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/01/30 19:05:45 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/01/30 19:05:47 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/01/30 19:05:47 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/01/30 19:05:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/01/30 19:05:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/01/30 19:05:52 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/01/30 19:05:52 INFO metastore.ObjectStore: Initialized ObjectStore
18/01/30 19:05:52 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/01/30 19:05:52 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
18/01/30 19:05:53 INFO metastore.HiveMetaStore: Added admin role in metastore
18/01/30 19:05:53 INFO metastore.HiveMetaStore: Added public role in metastore
18/01/30 19:05:53 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
18/01/30 19:05:53 INFO metastore.HiveMetaStore: 0: get_all_databases
18/01/30 19:05:53 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr      cmd=get_all_databases
18/01/30 19:05:53 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
18/01/30 19:05:53 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr      cmd=get_functions: db=default pat=*
18/01/30 19:05:53 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/01/30 19:05:54 INFO session.SessionState: Created local directory: /home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/container_1516010594436_0185_02_000001/tmp/hadoop
18/01/30 19:05:54 INFO session.SessionState: Created local directory: /home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/container_1516010594436_0185_02_000001/tmp/4990ced4-7b5c-46d6-a0f6-9ecfdcbc22c5_resources
18/01/30 19:05:54 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/4990ced4-7b5c-46d6-a0f6-9ecfdcbc22c5
18/01/30 19:05:54 INFO session.SessionState: Created local directory: /home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/container_1516010594436_0185_02_000001/tmp/hadoop/4990ced4-7b5c-46d6-a0f6-9ecfdcbc22c5
18/01/30 19:05:54 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/4990ced4-7b5c-46d6-a0f6-9ecfdcbc22c5/_tmp_space.db
18/01/30 19:05:54 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/container_1516010594436_0185_02_000001/spark-warehouse
18/01/30 19:05:54 INFO metastore.HiveMetaStore: 0: get_database: default
18/01/30 19:05:54 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr      cmd=get_database: default
18/01/30 19:05:54 INFO metastore.HiveMetaStore: 0: get_database: global_temp
18/01/30 19:05:54 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr      cmd=get_database: global_temp
18/01/30 19:05:54 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
18/01/30 19:05:54 INFO execution.SparkSqlParser: Parsing command: default.kylin_intermediate_kylin_sales_cube_clone_c3468331_ab55_4368_b691_bf08e2b5fd2a
18/01/30 19:05:55 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=kylin_intermediate_kylin_sales_cube_clone_c3468331_ab55_4368_b691_bf08e2b5fd2a
18/01/30 19:05:55 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr      cmd=get_table : db=default tbl=kylin_intermediate_kylin_sales_cube_clone_c3468331_ab55_4368_b691_bf08e2b5fd2a
18/01/30 19:05:55 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer
java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer
        at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
        at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:636)
Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'kylin_intermediate_kylin_sales_cube_clone_c3468331_ab55_4368_b691_bf08e2b5fd2a' not found in database 'default';
        at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:74)
        at org.apache.spark.sql.hive.client.HiveClient$$anonfun$getTable$1.apply(HiveClient.scala:74)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.hive.client.HiveClient$class.getTable(HiveClient.scala:74)
        at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:78)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable$1.apply(HiveExternalCatalog.scala:118)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$getRawTable(HiveExternalCatalog.scala:117)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:628)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:628)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:627)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:124)
        at org.apache.spark.sql.hive.HiveSessionCatalog.lookupRelation(HiveSessionCatalog.scala:70)
        at org.apache.spark.sql.SparkSession.table(SparkSession.scala:586)
        at org.apache.spark.sql.SparkSession.table(SparkSession.scala:582)
        at org.apache.spark.sql.SQLContext.table(SQLContext.scala:708)
        at org.apache.kylin.engine.spark.SparkCubingByLayer.execute(SparkCubingByLayer.java:167)
        at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
        ... 6 more
18/01/30 19:05:55 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer)
18/01/30 19:05:55 INFO spark.SparkContext: Invoking stop() from shutdown hook
18/01/30 19:05:55 INFO server.ServerConnector: Stopped Spark@36b4f4c5{HTTP/1.1}{0.0.0.0:0}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@580d6fd4{/stages/stage/kill,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6355a0a0{/jobs/job/kill,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6de87c4c{/api,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4bb1ed6b{/,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@d66c0ae{/static,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3de67ab3{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@14c64726{/executors/threadDump,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1afb9d54{/executors/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7bd97d3c{/executors,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6af87e2b{/environment/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4215247{/environment,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@53e654bf{/storage/rdd/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2d479ee6{/storage/rdd,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@a2c9363{/storage/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@71f4dc98{/storage,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1e1c0871{/stages/pool/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@41ccc000{/stages/pool,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6d72da27{/stages/stage/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4b84807a{/stages/stage,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@44336797{/stages/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7f3a751c{/stages,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6dc9732e{/jobs/job/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@201b73c{/jobs/job,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5a8c27d3{/jobs/json,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1727271{/jobs,null,UNAVAILABLE,@Spark}
18/01/30 19:05:55 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.1.184:37939
18/01/30 19:05:55 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
18/01/30 19:05:55 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
18/01/30 19:05:55 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/01/30 19:05:55 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
18/01/30 19:05:55 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/01/30 19:05:55 INFO memory.MemoryStore: MemoryStore cleared
18/01/30 19:05:55 INFO storage.BlockManager: BlockManager stopped
18/01/30 19:05:55 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/01/30 19:05:55 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/01/30 19:05:55 INFO spark.SparkContext: Successfully stopped SparkContext
18/01/30 19:05:55 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.RuntimeException: error execute org.apache.kylin.engine.spark.SparkCubingByLayer)
18/01/30 19:05:55 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
18/01/30 19:05:55 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://192.168.1.171:9000/user/hadoop/.sparkStaging/application_1516010594436_0185
18/01/30 19:05:55 INFO util.ShutdownHookManager: Shutdown hook called
18/01/30 19:05:55 INFO util.ShutdownHookManager: Deleting directory /home/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1516010594436_0185/spark-502e0062-d08e-47a0-9663-4b284f141d3f
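
The failure is the NoSuchTableException above: Spark's HiveExternalCatalog cannot see the intermediate flat table that Kylin built in an earlier step. The "Using direct SQL, underlying DB is DERBY" and "Version information not found in metastore" lines suggest the application master spun up a fresh embedded Derby metastore instead of connecting to the cluster's Hive metastore, so an empty 'default' database is exactly what it would find. A minimal sketch of the usual remedy, assuming Kylin 2.x and the Hadoop layout shown in this log (the Hive conf path below is hypothetical; adjust to your installation):

    # kylin.properties: the directory Kylin passes to the Spark job as its
    # Hadoop conf dir should contain hive-site.xml alongside core-site.xml
    # and yarn-site.xml; without it, Spark falls back to a local Derby
    # metastore as seen in this log.
    kylin.env.hadoop-conf-dir=/home/hadoop/software/hadoop-2.7.3/etc/hadoop

    # On the Kylin node, copy (or symlink) the Hive client config there.
    # /home/hadoop/software/hive/conf is a hypothetical location:
    cp /home/hadoop/software/hive/conf/hive-site.xml \
       /home/hadoop/software/hadoop-2.7.3/etc/hadoop/

As a quick sanity check before resubmitting the cube build, the intermediate table should be visible from the Hive CLI on the cluster, e.g. SHOW TABLES IN default LIKE 'kylin_intermediate_*';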
