[ https://issues.apache.org/jira/browse/KYLIN-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17041803#comment-17041803 ]

Xiaoxiang Yu commented on KYLIN-4384:
-------------------------------------
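The `-base64EncodedSql*` arguments in the log below carry the generated Hive statements as Base64. A minimal sketch for decoding them for inspection, using only the JDK's `java.util.Base64` (the class name `DecodeSparkEntrySql` is illustrative, not part of Kylin):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative helper: decodes a -base64EncodedSql* argument from the
// SparkEntry command line so the underlying Hive SQL can be read directly.
public class DecodeSparkEntrySql {
    public static String decodeSql(String base64Sql) {
        byte[] bytes = Base64.getDecoder().decode(base64Sql);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // -base64EncodedSql0 from the log below
        System.out.println(decodeSql("VVNFIGRlZmF1bHQ7Cg=="));  // prints: USE default;
    }
}
```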

{code:java}
org.apache.kylin.engine.spark.exception.SparkException: OS command error exit with return code: 1, error message:
20/02/21 08:56:16 WARN SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.executor.memoryOverhead' instead.
SparkEntry args: -className 
org.apache.kylin.engine.spark.SparkCreatingFlatTable -base64EncodedSql2 
Q1JFQVRFIEVYVEVSTkFMIFRBQkxFIElGIE5PVCBFWElTVFMga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmUKKApgTElORU9SREVSX0xPX09SREVSS0VZYCBiaWdpbnQKLGBMSU5FT1JERVJfTE9fQ1VTVEtFWWAgaW50CixgTElORU9SREVSX0xPX1BBUlRLRVlgIGludAopClNUT1JFRCBBUyBTRVFVRU5DRUZJTEUKTE9DQVRJT04gJ3MzOi8veGlhb3hpYW5nLXl1L2t5bGluL2t5bGluX21ldGFkYXRhX2ZhaWxvdmVyL2t5bGluLTg2MTFjOTYyLTVhMmUtNWEyZi0yNTFlLTBjN2RjNzRhNmIwOS9reWxpbl9pbnRlcm1lZGlhdGVfc3NiX3NwYXJrXzEyODUwMDY0XzQ3NThfZTA3Yl80ZTA4XzRkOWZmNTU5ZWRiZSc=
 -base64EncodedSql1 
RFJPUCBUQUJMRSBJRiBFWElTVFMga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmU7Cg==
 -base64EncodedSql0 VVNFIGRlZmF1bHQ7Cg== -base64StepName 
Q3JlYXRlIEludGVybWVkaWF0ZSBGbGF0IFRhYmxlIFdpdGggU3Bhcms= -sqlCount 5 
-base64EncodedSql4 
SU5TRVJUIE9WRVJXUklURSBUQUJMRSBga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmVgIFNFTEVDVApgTElORU9SREVSYC5gTE9fT1JERVJLRVlgIGFzIGBMSU5FT1JERVJfTE9fT1JERVJLRVlgCixgTElORU9SREVSYC5gTE9fQ1VTVEtFWWAgYXMgYExJTkVPUkRFUl9MT19DVVNUS0VZYAosYExJTkVPUkRFUmAuYExPX1BBUlRLRVlgIGFzIGBMSU5FT1JERVJfTE9fUEFSVEtFWWAKIEZST00gYFJFQUxUSU1FX0xBTUJEQV9URVNUYC5gTElORU9SREVSYCBhcyBgTElORU9SREVSYApMRUZUIEpPSU4gYFJFQUxUSU1FX0xBTUJEQV9URVNUYC5gQ1VTVE9NRVJgIGFzIGBDVVNUT01FUmAKT04gYExJTkVPUkRFUmAuYExPX0NVU1RLRVlgID0gYENVU1RPTUVSYC5gQ19DVVNUS0VZYApXSEVSRSAxPTEgQU5EIChgTElORU9SREVSYC5MT19PUkRFUkRBVEUgPj0gMTk5MjAxMDEgQU5EIGBMSU5FT1JERVJgLkxPX09SREVSREFURSA8IDE5OTQwMTAxKQo7Cg==
 -segmentid 19920101000000_19940101000000 -base64EncodedSql3 
CkFMVEVSIFRBQkxFIGt5bGluX2ludGVybWVkaWF0ZV9zc2Jfc3BhcmtfMTI4NTAwNjRfNDc1OF9lMDdiXzRlMDhfNGQ5ZmY1NTllZGJlIFNFVCBUQkxQUk9QRVJUSUVTKCdhdXRvLnB1cmdlJz0ndHJ1ZScp
-cubename SSB_SPARK
Running org.apache.kylin.engine.spark.SparkCreatingFlatTable
20/02/21 08:56:17 INFO SparkSqlBatch: start execute sql 
batch job, cubeName: SSB_SPARK, stepName: Create Intermediate Flat Table With Spark, segmentId: 19920101000000_19940101000000, sqlCount: 5
20/02/21 08:56:17 WARN SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.executor.memoryOverhead' instead.
20/02/21 08:56:17 INFO SparkContext: Running Spark version 2.4.4
20/02/21 08:56:17 INFO SparkContext: Submitted application: Create Intermediate Flat Table With Spark for cube: SSB_SPARK, segment 19920101000000_19940101000000
20/02/21 08:56:17 INFO SecurityManager: Changing view acls to: root
20/02/21 08:56:17 INFO SecurityManager: Changing modify acls to: root
20/02/21 08:56:17 INFO SecurityManager: Changing view acls groups to: 
20/02/21 08:56:17 INFO SecurityManager: Changing modify acls groups to: 
20/02/21 08:56:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
20/02/21 08:56:18 INFO Utils: Successfully started service 'sparkDriver' on port 33295.
20/02/21 08:56:18 INFO SparkEnv: Registering MapOutputTracker
20/02/21 08:56:18 INFO SparkEnv: Registering BlockManagerMaster
20/02/21 08:56:18 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/02/21 08:56:18 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/02/21 08:56:18 INFO DiskBlockManager: Created local directory at /mnt/tmp/blockmgr-fb61ddff-445a-44cb-96ef-60fa7d0e754d
20/02/21 08:56:18 INFO MemoryStore: MemoryStore started with capacity 1028.8 MB
20/02/21 08:56:18 INFO SparkEnv: Registering OutputCommitCoordinator
20/02/21 08:56:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/02/21 08:56:18 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://ip-172-31-10-209.cn-northwest-1.compute.internal:4040
20/02/21 08:56:18 INFO SparkContext: Added JAR file:/home/ec2-user/apache-kylin-3.1.0-SNAPSHOT-bin/lib/kylin-job-3.1.0-SNAPSHOT.jar at spark://ip-172-31-10-209.cn-northwest-1.compute.internal:33295/jars/kylin-job-3.1.0-SNAPSHOT.jar with timestamp 1582275378810
20/02/21 08:56:18 INFO Utils: Using initial executors = 100, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
20/02/21 08:56:19 INFO RMProxy: Connecting to ResourceManager at ip-172-31-10-209.cn-northwest-1.compute.internal/172.31.10.209:8032
20/02/21 08:56:19 INFO Client: Requesting a new application from cluster with 2 NodeManagers
20/02/21 08:56:19 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
20/02/21 08:56:19 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
20/02/21 08:56:19 INFO Client: Setting up container launch context for our AM
20/02/21 08:56:19 INFO Client: Setting up the launch environment for our AM container
20/02/21 08:56:19 INFO Client: Preparing resources for our AM container
20/02/21 08:56:22 INFO ClientConfigurationFactory: Set initial getObject socket timeout to 2000 ms.
20/02/21 08:56:22 INFO Client: Uploading resource s3://xiaoxiang-yu/package/kylin_binary_AWS_GLUE/spark-libs.jar -> hdfs://ip-172-31-10-209.cn-northwest-1.compute.internal:8020/user/root/.sparkStaging/application_1582272753196_0021/spark-libs.jar
20/02/21 08:56:32 INFO Client: Uploading resource file:/home/ec2-user/apache-kylin-3.1.0-SNAPSHOT-bin/lib/kylin-job-3.1.0-SNAPSHOT.jar -> hdfs://ip-172-31-10-209.cn-northwest-1.compute.internal:8020/user/root/.sparkStaging/application_1582272753196_0021/kylin-job-3.1.0-SNAPSHOT.jar
20/02/21 08:56:32 INFO Client: Uploading resource file:/etc/spark/conf/hive-site.xml -> hdfs://ip-172-31-10-209.cn-northwest-1.compute.internal:8020/user/root/.sparkStaging/application_1582272753196_0021/hive-site.xml
20/02/21 08:56:33 INFO Client: Uploading resource file:/mnt/tmp/spark-da9ec736-7705-4f6c-86d3-bd46bba4831a/__spark_conf__9191810147492544405.zip -> hdfs://ip-172-31-10-209.cn-northwest-1.compute.internal:8020/user/root/.sparkStaging/application_1582272753196_0021/__spark_conf__.zip
20/02/21 08:56:33 INFO SecurityManager: Changing view acls to: root
20/02/21 08:56:33 INFO SecurityManager: Changing modify acls to: root
20/02/21 08:56:33 INFO SecurityManager: Changing view acls groups to: 
20/02/21 08:56:33 INFO SecurityManager: Changing modify acls groups to: 
20/02/21 08:56:33 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
20/02/21 08:56:34 INFO Client: Submitting application 
application_1582272753196_0021 to ResourceManager
20/02/21 08:56:34 INFO YarnClientImpl: Submitted application application_1582272753196_0021
20/02/21 08:56:34 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1582272753196_0021 and attemptId None
20/02/21 08:56:35 INFO Client: Application report for application_1582272753196_0021 (state: ACCEPTED)
20/02/21 08:56:35 INFO Client: client token: N/A diagnostics: AM container is launched, waiting for AM container to Register with RM ApplicationMaster host: N/A ApplicationMaster RPC port: -1 queue: default start time: 1582275394156 final status: UNDEFINED tracking URL: http://ip-172-31-10-209.cn-northwest-1.compute.internal:20888/proxy/application_1582272753196_0021/ user: root
20/02/21 08:56:36 INFO Client: Application report for application_1582272753196_0021 (state: ACCEPTED)
20/02/21 08:56:37 INFO Client: Application report for application_1582272753196_0021 (state: ACCEPTED)
20/02/21 08:56:38 INFO Client: Application report for application_1582272753196_0021 (state: RUNNING)
20/02/21 08:56:38 INFO Client: client token: N/A diagnostics: N/A ApplicationMaster host: 172.31.12.148 ApplicationMaster RPC port: -1 queue: default start time: 1582275394156 final status: UNDEFINED tracking URL: http://ip-172-31-10-209.cn-northwest-1.compute.internal:20888/proxy/application_1582272753196_0021/ user: root
20/02/21 08:56:38 INFO YarnClientSchedulerBackend: Application application_1582272753196_0021 has started running.
20/02/21 08:56:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45103.
20/02/21 08:56:38 INFO NettyBlockTransferService: Server created on ip-172-31-10-209.cn-northwest-1.compute.internal:45103
20/02/21 08:56:38 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/02/21 08:56:38 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, ip-172-31-10-209.cn-northwest-1.compute.internal, 45103, None)
20/02/21 08:56:38 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-10-209.cn-northwest-1.compute.internal:45103 with 1028.8 MB RAM, BlockManagerId(driver, ip-172-31-10-209.cn-northwest-1.compute.internal, 45103, None)
20/02/21 08:56:38 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, ip-172-31-10-209.cn-northwest-1.compute.internal, 45103, None)
20/02/21 08:56:38 INFO BlockManager: external shuffle service port = 7337
20/02/21 08:56:38 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, ip-172-31-10-209.cn-northwest-1.compute.internal, 45103, None)
20/02/21 08:56:38 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> ip-172-31-10-209.cn-northwest-1.compute.internal, PROXY_URI_BASES -> http://ip-172-31-10-209.cn-northwest-1.compute.internal:20888/proxy/application_1582272753196_0021), /proxy/application_1582272753196_0021
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, /jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, /stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, /storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, /executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, /api, /jobs/job/kill, /stages/stage/kill.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
20/02/21 08:56:38 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
20/02/21 08:56:38 INFO EventLoggingListener: Logging events to s3://xiaoxiang-yu/kylin/spark-history/application_1582272753196_0021
20/02/21 08:56:38 INFO Utils: Using initial executors = 100, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
20/02/21 08:56:38 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
20/02/21 08:56:38 INFO SparkSqlBatch: execute spark sql: USE default
20/02/21 08:56:38 INFO SharedState: loading hive config file: file:/etc/spark/conf.dist/hive-site.xml
20/02/21 08:56:38 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('hdfs:///user/spark/warehouse').
20/02/21 08:56:38 INFO SharedState: Warehouse path is 'hdfs:///user/spark/warehouse'.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/json.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.
20/02/21 08:56:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.
20/02/21 08:56:39 INFO StateStoreCoordinatorRef: Registered 
StateStoreCoordinator endpoint
20/02/21 08:56:39 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
20/02/21 08:56:40 WARN HiveConf: HiveConf of name hive.server2.thrift.url does not exist
20/02/21 08:56:40 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:40 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:40 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:41 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:41 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:41 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:42 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:42 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:42 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:42 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.31.12.148:49636) with ID 4
20/02/21 08:56:42 INFO ExecutorAllocationManager: New executor 4 has registered (new total is 1)
20/02/21 08:56:42 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.31.12.148:49634) with ID 3
20/02/21 08:56:42 INFO ExecutorAllocationManager: New executor 3 has registered (new total is 2)
20/02/21 08:56:43 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-12-148.cn-northwest-1.compute.internal:44745 with 2.2 GB RAM, BlockManagerId(4, ip-172-31-12-148.cn-northwest-1.compute.internal, 44745, None)
20/02/21 08:56:43 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-12-148.cn-northwest-1.compute.internal:40235 with 2.2 GB RAM, BlockManagerId(3, ip-172-31-12-148.cn-northwest-1.compute.internal, 40235, None)
20/02/21 08:56:43 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:43 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:43 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:43 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.31.13.99:55410) with ID 1
20/02/21 08:56:43 INFO ExecutorAllocationManager: New executor 1 has registered (new total is 3)
20/02/21 08:56:43 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.31.13.99:55412) with ID 2
20/02/21 08:56:43 INFO ExecutorAllocationManager: New executor 2 has registered (new total is 4)
20/02/21 08:56:43 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-13-99.cn-northwest-1.compute.internal:40255 with 2.2 GB RAM, BlockManagerId(2, ip-172-31-13-99.cn-northwest-1.compute.internal, 40255, None)
20/02/21 08:56:43 INFO BlockManagerMasterEndpoint: Registering block manager ip-172-31-13-99.cn-northwest-1.compute.internal:38989 with 2.2 GB RAM, BlockManagerId(1, ip-172-31-13-99.cn-northwest-1.compute.internal, 38989, None)
20/02/21 08:56:44 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:44 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:44 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:45 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:45 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:45 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:46 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:46 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:46 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:47 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:47 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:47 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:48 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:48 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:48 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:49 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:49 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:49 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:50 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:50 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:50 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:51 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:51 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:51 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:52 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:52 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:52 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:53 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:53 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:53 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:54 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:54 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:54 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:55 WARN Hive: Failed to access metastore. This class should not accessed 
in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1237)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
    at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:167)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
    at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:183)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:117)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:271)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:384)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:258)
    at org.apache.spark.sql.execution.command.SetDatabaseCommand.run(databases.scala:59)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
    at org.apache.kylin.engine.spark.SparkSqlBatch.execute(SparkSqlBatch.java:113)
    at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
    at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: 
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:98)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClientFactory.createMetaStoreClient(SessionHiveMetaStoreClientFactory.java:42)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3007)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3042)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1235)
    ... 57 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
    ... 65 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
    at 
org.apache.thrift.transport.TSocket.open(TSocket.java:226)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:98)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClientFactory.createMetaStoreClient(SessionHiveMetaStoreClientFactory.java:42)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3007)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3042)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1235)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:175)
    at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:167)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
    at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:183)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:117)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:271)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:384)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
    at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:258)
    at org.apache.spark.sql.execution.command.SetDatabaseCommand.run(databases.scala:59)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at 
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643) at 
org.apache.kylin.engine.spark.SparkSqlBatch.execute(SparkSqlBatch.java:113) at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
 at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
 at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at 
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at 
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928) at 
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937) at 
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: 
java.net.ConnectException: 拒绝连接 (Connection refused) at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:607) at 
org.apache.thrift.transport.TSocket.open(TSocket.java:221) ... 73 more) at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:467)
 at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
 at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
 ... 70 more
20/02/21 08:56:55 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:55 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:55 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:56 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:56 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:56 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:57 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:57 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:57 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:58 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:58 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:58 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:56:59 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:56:59 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:56:59 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:00 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:00 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:00 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:01 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:01 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:01 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:02 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:02 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:02 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:03 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:03 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:03 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:04 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:04 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:04 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:05 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:05 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:05 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:06 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:06 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:06 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:07 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:07 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:07 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:08 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:08 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:08 INFO metastore: Waiting 1 seconds before next connection attempt.
20/02/21 08:57:09 INFO metastore: Trying to connect to metastore with URI thrift://ip-172-31-10-209.cn-northwest-1.compute.internal:9083
20/02/21 08:57:09 WARN metastore: Failed to connect to the MetaStore Server...
20/02/21 08:57:09 INFO metastore: Waiting 1 seconds before next connection attempt.
Exception in thread "main" java.lang.RuntimeException: error execute 
org.apache.kylin.engine.spark.SparkCreatingFlatTable. Root cause: 
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient; at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:42)
 at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
 at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at 
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at 
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928) at 
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937) at 
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: 
org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: 
java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient; at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
 at 
org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
 at 
org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
 at 
org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
 at 
org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
 at 
org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
 at 
org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:258)
 at 
org.apache.spark.sql.execution.command.SetDatabaseCommand.run(databases.scala:59)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370) at 
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
 at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369) at 
org.apache.spark.sql.Dataset.<init>(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79) at 
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643) at 
org.apache.kylin.engine.spark.SparkSqlBatch.execute(SparkSqlBatch.java:113) at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
 ... 13 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: 
Unable to instantiate 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) at 
org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:183)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:117)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:271)
 at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:384) 
at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286) 
at 
org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
 ... 39 more
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:98)
 at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClientFactory.createMetaStoreClient(SessionHiveMetaStoreClientFactory.java:42)
 at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3007) 
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3042) at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503) ... 
54 more
Caused by: java.lang.reflect.InvocationTargetException at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
 ... 62 more
Caused by: MetaException(message:Could not connect to meta store 
using any of the URIs provided. Most recent failure: 
org.apache.thrift.transport.TTransportException: java.net.ConnectException: 
拒绝连接 (Connection refused) at 
org.apache.thrift.transport.TSocket.open(TSocket.java:226) at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
 at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
 at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
 at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:98)
 at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClientFactory.createMetaStoreClient(SessionHiveMetaStoreClientFactory.java:42)
 at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3007) 
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3042) at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503) at 
org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:183)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:117)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:271)
 at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:384) 
at 
org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:286) 
at 
org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
 at 
org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
 at 
org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
 at 
org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:141)
 at 
org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:136)
 at 
org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
 at 
org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:55)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.setCurrentDatabase(SessionCatalog.scala:258)
 at 
org.apache.spark.sql.execution.command.SetDatabaseCommand.run(databases.scala:59)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370) at 
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
 at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369) at 
org.apache.spark.sql.Dataset.<init>(Dataset.scala:194) at 
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79) at 
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643) at 
org.apache.kylin.engine.spark.SparkSqlBatch.execute(SparkSqlBatch.java:113) at 
org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
 at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
 at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161) at 
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184) at 
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928) at 
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937) at 
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: 
java.net.ConnectException: 拒绝连接 (Connection refused) at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:607) at 
org.apache.thrift.transport.TSocket.open(TSocket.java:221) ... 70 more) at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:467)
 at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
 at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
 ... 67 more
20/02/21 08:57:10 INFO SparkContext: Invoking stop() from shutdown hook
20/02/21 08:57:10 INFO SparkUI: Stopped Spark web UI at http://ip-172-31-10-209.cn-northwest-1.compute.internal:4040
20/02/21 08:57:10 INFO YarnClientSchedulerBackend: Interrupting monitor thread
20/02/21 08:57:10 INFO YarnClientSchedulerBackend: Shutting down all executors
20/02/21 08:57:10 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
20/02/21 08:57:10 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices(serviceOption=None, services=List(), started=false)
20/02/21 08:57:10 INFO YarnClientSchedulerBackend: Stopped
20/02/21 08:57:10 INFO S3NativeFileSystem2: rename s3://xiaoxiang-yu/kylin/spark-history/application_1582272753196_0021.inprogress s3://xiaoxiang-yu/kylin/spark-history/application_1582272753196_0021
20/02/21 08:57:10 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.io.FileNotFoundException: No such file or directory: 'kylin/spark-history/application_1582272753196_0021.inprogress' at 
com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatusFromS3CheckingConsistencyIfEnabled(ConsistencyCheckerS3FileSystem.java:521)
 at 
com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:423)
 at 
com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:416)
 at 
com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.rename(ConsistencyCheckerS3FileSystem.java:1020)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
 at com.sun.proxy.$Proxy41.rename(Unknown Source) at 
com.amazon.ws.emr.hadoop.fs.s3n2.S3NativeFileSystem2.rename(S3NativeFileSystem2.java:161)
 at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.rename(EmrFileSystem.java:326) at 
org.apache.spark.scheduler.EventLoggingListener.stop(EventLoggingListener.scala:258)
 at 
org.apache.spark.SparkContext$$anonfun$stop$8$$anonfun$apply$mcV$sp$6.apply(SparkContext.scala:1960)
 at 
org.apache.spark.SparkContext$$anonfun$stop$8$$anonfun$apply$mcV$sp$6.apply(SparkContext.scala:1960)
 at scala.Option.foreach(Option.scala:257) at 
org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1960)
 at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340) at 
org.apache.spark.SparkContext.stop(SparkContext.scala:1959) at 
org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:575) 
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216) 
at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
 at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
 at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
 at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945) at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
 at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
 at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
 at scala.util.Try$.apply(Try.scala:192) at 
org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
 at 
org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)
20/02/21 08:57:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/02/21 08:57:10 INFO MemoryStore: MemoryStore cleared
20/02/21 08:57:10 INFO BlockManager: BlockManager stopped
20/02/21 08:57:10 INFO BlockManagerMaster: BlockManagerMaster stopped
20/02/21 08:57:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/02/21 08:57:10 INFO SparkContext: Successfully stopped SparkContext
20/02/21 08:57:10 INFO ShutdownHookManager: Shutdown hook called
20/02/21 08:57:10 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-da9ec736-7705-4f6c-86d3-bd46bba4831a
20/02/21 08:57:10 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-829322ea-07a7-4c3b-9b59-503a3dc2737b
The command is: export 
HADOOP_CONF_DIR=/etc/hadoop/conf && /usr/lib/spark/bin/spark-submit --class 
org.apache.kylin.common.util.SparkEntry --name "Create Intermediate Flat Table 
With Spark" --conf spark.executor.instances=40  --conf 
spark.yarn.archive=s3://xiaoxiang-yu/package/kylin_binary_AWS_GLUE/spark-libs.jar
  --conf spark.yarn.queue=default  --conf 
spark.history.fs.logDirectory=s3://xiaoxiang-yu/kylin/spark-history  --conf 
spark.master=yarn  --conf spark.hadoop.yarn.timeline-service.enabled=false  
--conf spark.executor.memory=4G  --conf spark.eventLog.enabled=true  --conf 
spark.eventLog.dir=s3://xiaoxiang-yu/kylin/spark-history  --conf 
spark.yarn.executor.memoryOverhead=1024  --conf spark.driver.memory=2G  --conf 
spark.shuffle.service.enabled=true  --conf 
spark.yarn.access.hadoopFileSystems=s3://xiaoxiang-yu --jars 
/home/ec2-user/apache-kylin-3.1.0-SNAPSHOT-bin/lib/kylin-job-3.1.0-SNAPSHOT.jar 
/home/ec2-user/apache-kylin-3.1.0-SNAPSHOT-bin/lib/kylin-job-3.1.0-SNAPSHOT.jar 
-className org.apache.kylin.engine.spark.SparkCreatingFlatTable 
-base64EncodedSql2 
Q1JFQVRFIEVYVEVSTkFMIFRBQkxFIElGIE5PVCBFWElTVFMga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmUKKApgTElORU9SREVSX0xPX09SREVSS0VZYCBiaWdpbnQKLGBMSU5FT1JERVJfTE9fQ1VTVEtFWWAgaW50CixgTElORU9SREVSX0xPX1BBUlRLRVlgIGludAopClNUT1JFRCBBUyBTRVFVRU5DRUZJTEUKTE9DQVRJT04gJ3MzOi8veGlhb3hpYW5nLXl1L2t5bGluL2t5bGluX21ldGFkYXRhX2ZhaWxvdmVyL2t5bGluLTg2MTFjOTYyLTVhMmUtNWEyZi0yNTFlLTBjN2RjNzRhNmIwOS9reWxpbl9pbnRlcm1lZGlhdGVfc3NiX3NwYXJrXzEyODUwMDY0XzQ3NThfZTA3Yl80ZTA4XzRkOWZmNTU5ZWRiZSc=
 -base64EncodedSql1 
RFJPUCBUQUJMRSBJRiBFWElTVFMga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmU7Cg==
 -base64EncodedSql0 VVNFIGRlZmF1bHQ7Cg== -base64StepName 
Q3JlYXRlIEludGVybWVkaWF0ZSBGbGF0IFRhYmxlIFdpdGggU3Bhcms= -sqlCount 5 
-base64EncodedSql4 
SU5TRVJUIE9WRVJXUklURSBUQUJMRSBga3lsaW5faW50ZXJtZWRpYXRlX3NzYl9zcGFya18xMjg1MDA2NF80NzU4X2UwN2JfNGUwOF80ZDlmZjU1OWVkYmVgIFNFTEVDVApgTElORU9SREVSYC5gTE9fT1JERVJLRVlgIGFzIGBMSU5FT1JERVJfTE9fT1JERVJLRVlgCixgTElORU9SREVSYC5gTE9fQ1VTVEtFWWAgYXMgYExJTkVPUkRFUl9MT19DVVNUS0VZYAosYExJTkVPUkRFUmAuYExPX1BBUlRLRVlgIGFzIGBMSU5FT1JERVJfTE9fUEFSVEtFWWAKIEZST00gYFJFQUxUSU1FX0xBTUJEQV9URVNUYC5gTElORU9SREVSYCBhcyBgTElORU9SREVSYApMRUZUIEpPSU4gYFJFQUxUSU1FX0xBTUJEQV9URVNUYC5gQ1VTVE9NRVJgIGFzIGBDVVNUT01FUmAKT04gYExJTkVPUkRFUmAuYExPX0NVU1RLRVlgID0gYENVU1RPTUVSYC5gQ19DVVNUS0VZYApXSEVSRSAxPTEgQU5EIChgTElORU9SREVSYC5MT19PUkRFUkRBVEUgPj0gMTk5MjAxMDEgQU5EIGBMSU5FT1JERVJgLkxPX09SREVSREFURSA8IDE5OTQwMTAxKQo7Cg==
 -segmentid 19920101000000_19940101000000 -base64EncodedSql3 
CkFMVEVSIFRBQkxFIGt5bGluX2ludGVybWVkaWF0ZV9zc2Jfc3BhcmtfMTI4NTAwNjRfNDc1OF9lMDdiXzRlMDhfNGQ5ZmY1NTllZGJlIFNFVCBUQkxQUk9QRVJUSUVTKCdhdXRvLnB1cmdlJz0ndHJ1ZScp
 -cubename SSB_SPARK at 
org.apache.kylin.engine.spark.SparkExecutable.doWork(SparkExecutable.java:393) 
at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
 at 
org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
 at 
org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
 at 
org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)
{code}
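For anyone reproducing this: the {{-base64EncodedSqlN}} arguments in the spark-submit command above are just Base64-encoded SQL statements. A small standalone snippet like the following (not part of Kylin; class name is my own) can decode them for inspection:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SqlArgDecoder {
    // Decode one -base64EncodedSqlN argument back into readable SQL.
    static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // -base64EncodedSql0 from the spark-submit command above
        System.out.println(decode("VVNFIGRlZmF1bHQ7Cg=="));  // prints: USE default;
    }
}
```

Decoding all five arguments this way shows the step runs USE/DROP/CREATE/ALTER/INSERT statements against the intermediate flat table.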

> Spark build failed in EMR when KYLIN-4224 introduced
> ----------------------------------------------------
>
>                 Key: KYLIN-4384
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4384
>             Project: Kylin
>          Issue Type: Bug
>          Components: Spark Engine
>         Environment: AWS EMR 5.28 with Glue support
>            Reporter: Xiaoxiang Yu
>            Priority: Major
>              Labels: aws-emr, aws-glue
>         Attachments: image-2020-02-21-20-17-36-060.png, 
> image-2020-02-21-20-18-12-440.png
>
>
> I built a package from the latest code on the master branch to test the Kylin 
> Glue integration. 
>  
> I created an EMR cluster (5.28) with Glue enabled. In my test, the MR build 
> engine works fine, but the Spark build engine fails at the first step; the 
> code change was introduced in https://issues.apache.org/jira/browse/KYLIN-4224.
>  
> The newly added step changes how the {color:#de350b}*intermediate 
> table*{color} is created. It is also mandatory, so nobody can disable or skip 
> it without rebuilding the binary, which is a problem for my setup. I have not 
> yet found the root cause, but I suggest adding a config option to make the 
> feature easy to disable.
>  
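The config switch suggested in the description could be sketched as below. This is purely illustrative: the property name {{kylin.engine.spark-create-flat-table-enabled}} is hypothetical and does not exist in Kylin today; the point is only that a boolean property with a default of "true" would preserve current behavior while letting users opt out.

```java
import java.util.Properties;

public class FlatTableEngineGuard {
    // Hypothetical property name, for illustration only.
    static final String PROP = "kylin.engine.spark-create-flat-table-enabled";

    // True when the Spark flat-table step should run; defaults to current behavior.
    static boolean sparkFlatTableEnabled(Properties conf) {
        return Boolean.parseBoolean(conf.getProperty(PROP, "true"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(PROP, "false");  // opt out, e.g. to fall back to the Hive path
        System.out.println(sparkFlatTableEnabled(conf));  // prints: false
    }
}
```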



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
