bunenghulai opened a new issue, #3463:
URL: https://github.com/apache/incubator-seatunnel/issues/3463

   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   1. Use SeaTunnel V1, version 2.1.2.
   2. Run the SeaTunnel job in Spark local mode.
   3. The job fails with `java.lang.OutOfMemoryError: GC overhead limit exceeded` (full stack trace below).
   
   ### SeaTunnel Version
   
   SeaTunnel 2.1.2
   
   ### SeaTunnel Config
   
   ```conf
   env {
       spark.app.name='ods_detail_full'
       spark.driver.memory=8G
       spark.executor.cores=4
       spark.executor.memory=8G
       spark.sql.adaptive.enabled=true
       spark.sql.adaptive.shuffle.targetPostShuffleInputSize=134217728
   }
   
   source {
     jdbc {
       driver = "com.mysql.jdbc.Driver"
       url = "jdbc:mysql://XXXXXXX/ent?tinyInt1isBit=false&zeroDateTimeBehavior=convertToNull"
       table = "detail"
       result_table_name = "hive_detail"
       user = "XXXX"
       password = "XXXXX"
     }
   }
   
   transform {
   
   }
   
   sink {
     Hive {
       sql = "insert overwrite table ods.ods_detail_full partition(dt='"${dt}"') select id, uid, business_type, create_time, update_time, is_send from hive_detail"
     }
   }
   ```
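   
   The `Caused by` frames in the logs point at the JDBC read path (Spark's `JdbcUtils` iterating a `com.mysql.jdbc.ResultSetImpl`), and the failure happens in a single task (Task 0 in stage 0.0), which suggests the whole `detail` table is read through one unpartitioned connection while MySQL Connector/J buffers the entire result set in heap. A hedged mitigation sketch, assuming the SeaTunnel 2.1.x Spark JDBC source forwards `partitionColumn`/`numPartitions`/`lowerBound`/`upperBound` to Spark's DataFrameReader (on some versions these are spelled with a `jdbc.` prefix, e.g. `jdbc.partitionColumn`, and passed through); the partition column and bounds below are hypothetical placeholders to check against your table first:
   
   ```conf
   source {
     jdbc {
       driver = "com.mysql.jdbc.Driver"
       # useCursorFetch + defaultFetchSize ask Connector/J to fetch rows in
       # batches instead of buffering the whole result set in heap.
       url = "jdbc:mysql://XXXXXXX/ent?tinyInt1isBit=false&zeroDateTimeBehavior=convertToNull&useCursorFetch=true&defaultFetchSize=10000"
       table = "detail"
       result_table_name = "hive_detail"
       user = "XXXX"
       password = "XXXXX"
       # Hypothetical partitioned read: split the scan across several tasks on a
       # numeric column so no single task has to hold the whole table.
       partitionColumn = "id"
       numPartitions = "8"
       lowerBound = "1"
       upperBound = "100000000"   # assumed max(id); query it before relying on it
     }
   }
   ```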
   
   
   ### Running Command
   
   ```shell
   ../../bin/start-seatunnel-spark.sh --master local[4] --deploy-mode client -i dt=2022-11-16 --config ./ods_detail_full.conf
   ```
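   
   Note that with `--master local[4]` there are no separate executor JVMs: the JDBC read, the sort, and the Hive write all run inside the one driver process, so `spark.executor.cores`/`spark.executor.memory` have no effect and the heap is whatever the driver JVM was launched with. If the launcher does not apply `spark.driver.memory` from the env block at submit time (driver memory must be fixed before the JVM starts in client mode), the job may actually be running on the default heap. A quick way to check the heap in effect, using only standard JDK tooling (nothing SeaTunnel-specific):
   
   ```shell
   # List running JVMs with their flags and find the spark-submit process;
   # the -Xmx value shown is the heap the driver (and, in local mode, all tasks) got.
   jps -lvm | grep -i sparksubmit
   ```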
   
   
   ### Error Exception
   
   ```log
   Driver stacktrace:
           at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
           at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
           at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
           at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
           at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
           at scala.Option.foreach(Option.scala:407)
           at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
           at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
           at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:752)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2093)
           at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:195)
           ... 38 more
   Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
           at com.mysql.jdbc.TimeUtil.fastTimestampCreate(TimeUtil.java:1166)
           at com.mysql.jdbc.ResultSetImpl.fastTimestampCreate(ResultSetImpl.java:1079)
           at com.mysql.jdbc.ResultSetRow.getTimestampFast(ResultSetRow.java:1393)
           at com.mysql.jdbc.ByteArrayRow.getTimestampFast(ByteArrayRow.java:127)
   ```
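   
   The `Caused by` frames show the OOM occurring while iterating the MySQL `ResultSet` (`fastTimestampCreate` during row decoding), i.e. on the read side, before anything is written to Hive. By default, MySQL Connector/J 5.x materializes the entire result set in memory; row-by-row streaming has to be requested explicitly. A minimal standalone sketch of the driver behavior involved (plain JDBC, not SeaTunnel code; host, credentials, and query are placeholders):
   
   ```java
   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.ResultSet;
   import java.sql.Statement;
   
   public class StreamingReadSketch {
       public static void main(String[] args) throws Exception {
           try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://host/ent", "user", "password");
                Statement stmt = conn.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
               // Integer.MIN_VALUE is Connector/J's switch for row-by-row streaming;
               // without it (or useCursorFetch=true plus a positive fetch size on
               // the URL), the whole result set is buffered in heap -- which is
               // where this OOM comes from.
               stmt.setFetchSize(Integer.MIN_VALUE);
               try (ResultSet rs = stmt.executeQuery("SELECT * FROM detail")) {
                   while (rs.next()) {
                       // process one row at a time; heap usage stays bounded
                   }
               }
           }
       }
   }
   ```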
   
   
   ### Flink or Spark Version
   
   Spark 3.0.0
   
   ### Java or Scala Version
   
   Java 1.8
   Scala 2.12
   
   ### Screenshots
   
   22/11/18 06:07:36 INFO SparkContext: Invoking stop() from shutdown hook
   22/11/18 06:07:36 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hadoop-offline-dms-cluster-worker-01, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
           at com.mysql.jdbc.TimeUtil.fastTimestampCreate(TimeUtil.java:1166)
           at com.mysql.jdbc.ResultSetImpl.fastTimestampCreate(ResultSetImpl.java:1079)
           at com.mysql.jdbc.ResultSetRow.getTimestampFast(ResultSetRow.java:1393)
           at com.mysql.jdbc.ByteArrayRow.getTimestampFast(ByteArrayRow.java:127)
           at com.mysql.jdbc.ResultSetImpl.getTimestampInternal(ResultSetImpl.java:6592)
           at com.mysql.jdbc.ResultSetImpl.getTimestamp(ResultSetImpl.java:6192)
           at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$13(JdbcUtils.scala:457)
           at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$13$adapted(JdbcUtils.scala:456)
           at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$$Lambda$2374/1818572994.apply(Unknown Source)
           at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:361)
           at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:343)
           at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
           at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
           at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
           at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
           at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
           at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
           at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:225)
           at org.apache.spark.sql.execution.SortExec.$anonfun$doExecute$1(SortExec.scala:119)
           at org.apache.spark.sql.execution.SortExec$$Lambda$2364/1209672886.apply(Unknown Source)
           at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
           at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
           at org.apache.spark.rdd.RDD$$Lambda$2365/1791512261.apply(Unknown Source)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:127)
           at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
           at org.apache.spark.executor.Executor$TaskRunner$$Lambda$2328/1482998604.apply(Unknown Source)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
   
   22/11/18 06:07:36 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
   22/11/18 06:07:36 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
   22/11/18 06:07:36 INFO SparkUI: Stopped Spark web UI at http://hadoop-offline-dms-cluster-worker-01:4040
   22/11/18 06:07:36 INFO TaskSchedulerImpl: Cancelling stage 0
   22/11/18 06:07:36 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
   22/11/18 06:07:36 INFO DAGScheduler: ResultStage 0 (sql at Hive.scala:43) failed in 179.619 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, hadoop-offline-dms-cluster-worker-01, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
           ... (stack trace identical to the one above)
   
   Driver stacktrace:
   22/11/18 06:07:36 INFO DAGScheduler: Job 0 failed: sql at Hive.scala:43, took 179.648999 s
   22/11/18 06:07:36 ERROR FileFormatWriter: Aborting job 35bfbc25-115c-4b33-9add-84b32d9c5eb3.
   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, hadoop-offline-dms-cluster-worker-01, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
           ... (stack trace identical to the one above)
   
   Driver stacktrace:
           at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
           at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
           at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
           at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
           at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
           at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
           at scala.Option.foreach(Option.scala:407)
           at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
           at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
           at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
           at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:752)
           at org.apache.spark.SparkContext.runJob(SparkContext.scala:2093)
           at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:195)
           at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
           at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
           at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
           at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3614)
           at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:606)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:601)
           at org.apache.seatunnel.spark.hive.sink.Hive.output(Hive.scala:43)
           at org.apache.seatunnel.spark.hive.sink.Hive.output(Hive.scala:29)
           at org.apache.seatunnel.spark.SparkEnvironment.sinkProcess(SparkEnvironment.java:179)
           at org.apache.seatunnel.spark.batch.SparkBatchExecution.start(SparkBatchExecution.java:54)
           at org.apache.seatunnel.core.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:76)
           at org.apache.seatunnel.core.base.Seatunnel.run(Seatunnel.java:39)
           at org.apache.seatunnel.core.spark.SeatunnelSpark.main(SeatunnelSpark.java:32)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
           ... (stack trace identical to the one above)
   22/11/18 06:07:36 ERROR Seatunnel:
   
   
===============================================================================
   
   
   22/11/18 06:07:36 ERROR Seatunnel: Fatal Error,
   
   22/11/18 06:07:36 ERROR Seatunnel: Please submit bug report in https://github.com/apache/incubator-seatunnel/issues
   
   22/11/18 06:07:36 ERROR Seatunnel: Reason:Execute Spark task error
   
   22/11/18 06:07:36 ERROR Seatunnel: Exception StackTrace:java.lang.RuntimeException: Execute Spark task error
           at org.apache.seatunnel.core.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:79)
           at org.apache.seatunnel.core.base.Seatunnel.run(Seatunnel.java:39)
           at org.apache.seatunnel.core.spark.SeatunnelSpark.main(SeatunnelSpark.java:32)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   Caused by: org.apache.spark.SparkException: Job aborted.
           at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:226)
           at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
           at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
           at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
           at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
           at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3614)
           at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
           at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
           at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:606)
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:601)
           at org.apache.seatunnel.spark.hive.sink.Hive.output(Hive.scala:43)
           at org.apache.seatunnel.spark.hive.sink.Hive.output(Hive.scala:29)
           at org.apache.seatunnel.spark.SparkEnvironment.sinkProcess(SparkEnvironment.java:179)
           at org.apache.seatunnel.spark.batch.SparkBatchExecution.start(SparkBatchExecution.java:54)
           at org.apache.seatunnel.core.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:76)
           ... 14 more
   Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, hadoop-offline-dms-cluster-worker-01, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
           ... (stack trace identical to the one above)
   
   Driver stacktrace:
           ... (driver stack trace identical to the one above)
           ... 38 more
   Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
           ... (stack trace identical to the one above)
   
   22/11/18 06:07:36 ERROR Seatunnel:
   
===============================================================================
   
   
   
   Exception in thread "main" java.lang.RuntimeException: Execute Spark task error
           ... (same stack trace and causes as the Seatunnel ERROR block above)
   22/11/18 06:07:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
   22/11/18 06:07:36 INFO MemoryStore: MemoryStore cleared
   22/11/18 06:07:36 INFO BlockManager: BlockManager stopped
   22/11/18 06:07:36 INFO BlockManagerMaster: BlockManagerMaster stopped
   22/11/18 06:07:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
   22/11/18 06:07:36 INFO SparkContext: Successfully stopped SparkContext
   22/11/18 06:07:36 INFO ShutdownHookManager: Shutdown hook called
   22/11/18 06:07:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-bc6fbdf0-4cfd-4012-912f-f896e5c11721
   22/11/18 06:07:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-6603e45b-d788-4a40-bddf-b46820083fd2
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

