CheneyYin opened a new issue, #4640:
URL: https://github.com/apache/incubator-seatunnel/issues/4640

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/incubator-seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   Multiple sinks are blocked and never start in STREAMING mode; the Spark engine starts only one sink.
   In my case, only one Console sink prints records.
   
   ### SeaTunnel Version
   
   2.3.1
   
   ### SeaTunnel Config
   
   ```conf
   env {
     job.mode = STREAMING
     job.name = "SeaTunnel"
     spark.master = "local[*]"
   }
   
   source {
     FakeSource {
       row.num = 1
       schema = {
         fields {
           name = "string"
         }
       }
       result_table_name = "fake0"
     }
   
     FakeSource {
       row.num = 1
       schema = {
         fields {
           age = "int"
         }
       }
       result_table_name = "fake1"
     }
   }
   
   sink {
     Console {
       source_table_name = "fake0"
     }
   
     Console {
       source_table_name = "fake1"
     }
   }
   ```
   
   
   ### Running Command
   
   ```shell
   ./bin/start-seatunnel-spark-3-connector-v2.sh --master local --deploy-mode client --config ./config/streaming.conf
   ```
   
   
   ### Error Exception
   
   ```log
   23/04/21 16:33:06 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir.
   23/04/21 16:33:06 INFO SharedState: Warehouse path is 
'file:/home/cheney/expr/seatunnel/apache-seatunnel-incubating-2.3.1/spark-warehouse'.
   23/04/21 16:33:09 INFO V2ScanRelationPushDown:
   Output: name#0
   
   23/04/21 16:33:10 INFO CodeGenerator: Code generated in 147.67237 ms
   23/04/21 16:33:10 INFO AppendDataExec: Start processing data source write 
support: 
org.apache.seatunnel.translation.spark.sink.SeaTunnelBatchWrite@34f2d3a6. The 
input RDD has 1 partitions.
   23/04/21 16:33:10 INFO SparkContext: Starting job: save at 
SinkExecuteProcessor.java:125
   23/04/21 16:33:10 INFO DAGScheduler: Got job 0 (save at 
SinkExecuteProcessor.java:125) with 1 output partitions
   23/04/21 16:33:10 INFO DAGScheduler: Final stage: ResultStage 0 (save at 
SinkExecuteProcessor.java:125)
   23/04/21 16:33:10 INFO DAGScheduler: Parents of final stage: List()
   23/04/21 16:33:10 INFO DAGScheduler: Missing parents: List()
   23/04/21 16:33:10 INFO DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[2] at save at SinkExecuteProcessor.java:125), which has no 
missing parents
   23/04/21 16:33:10 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 11.5 KiB, free 434.4 MiB)
   23/04/21 16:33:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes 
in memory (estimated size 5.6 KiB, free 434.4 MiB)
   23/04/21 16:33:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory 
on 192.168.2.185:42561 (size: 5.6 KiB, free: 434.4 MiB)
   23/04/21 16:33:10 INFO SparkContext: Created broadcast 0 from broadcast at 
DAGScheduler.scala:1513
   23/04/21 16:33:10 INFO DAGScheduler: Submitting 1 missing tasks from 
ResultStage 0 (MapPartitionsRDD[2] at save at SinkExecuteProcessor.java:125) 
(first 15 tasks are for partitions Vector(0))
   23/04/21 16:33:10 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks 
resource profile 0
   23/04/21 16:33:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 
0) (192.168.2.185, executor driver, partition 0, PROCESS_LOCAL, 4585 bytes) 
taskResourceAssignments Map()
   23/04/21 16:33:10 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
   23/04/21 16:33:10 INFO ConsoleSinkWriter: output rowType: name<STRING>
   23/04/21 16:33:10 INFO FakeSourceSplitEnumerator: Starting to calculate 
splits.
   23/04/21 16:33:10 INFO FakeSourceReader: wait split!
   23/04/21 16:33:10 INFO FakeSourceSplitEnumerator: Assigned 
[FakeSourceSplit(splitId=0, rowNum=1)] to 1 readers.
   23/04/21 16:33:10 INFO FakeSourceSplitEnumerator: Calculated splits 
successfully, the size of splits is 1.
   23/04/21 16:33:10 INFO FakeSourceSplitEnumerator: Assigning splits to 
readers 0 [FakeSourceSplit(splitId=0, rowNum=1)]
   23/04/21 16:33:11 INFO FakeSourceReader: 1 rows of data have been generated 
in split(0). Generation time: 1682065991894
   23/04/21 16:33:11 INFO ConsoleSinkWriter: subtaskIndex=0  rowIndex=1:  
SeaTunnelRow#tableId= SeaTunnelRow#kind=INSERT : QPFYX
   
   ----Block Here-------
   ```
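   
   The log stops right after the first Console writer prints its row, which is consistent with each sink's blocking `save` call being invoked one after another on the driver thread: the first call never returns, so the second sink is never started. A hypothetical sketch of that blocking pattern (`MultiSinkDemo` is my own illustration, not SeaTunnel's actual code), showing why sequential blocking calls start only the first sink while giving each sink its own thread starts both:
   
   ```java
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.TimeUnit;
   
   public class MultiSinkDemo {
   
       // Returns true if both (simulated) sinks manage to start within the timeout.
       // Each "sink" counts itself as started, then blocks forever -- the same
       // shape as a streaming write that never returns.
       static boolean bothSinksStart(boolean oneThreadPerSink) throws InterruptedException {
           CountDownLatch started = new CountDownLatch(2);
           Runnable blockingSink = () -> {
               started.countDown();                   // sink has started
               try {
                   new CountDownLatch(1).await();     // then block forever
               } catch (InterruptedException e) {
                   Thread.currentThread().interrupt();
               }
           };
           if (oneThreadPerSink) {
               // One thread per sink: both sinks start even though each blocks.
               ExecutorService pool = Executors.newCachedThreadPool();
               pool.submit(blockingSink);
               pool.submit(blockingSink);
               boolean ok = started.await(2, TimeUnit.SECONDS);
               pool.shutdownNow();
               return ok;
           } else {
               // Sequential invocation: the first sink blocks, the second never runs.
               Thread driver = new Thread(() -> {
                   blockingSink.run();
                   blockingSink.run();
               });
               driver.setDaemon(true);
               driver.start();
               boolean ok = started.await(2, TimeUnit.SECONDS);
               driver.interrupt();
               return ok;
           }
       }
   
       public static void main(String[] args) throws InterruptedException {
           System.out.println("sequential starts both sinks: " + bothSinksStart(false));
           System.out.println("one thread per sink starts both: " + bothSinksStart(true));
       }
   }
   ```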
   
   
   ### Flink or Spark Version
   
   spark 3.3.0
   
   ### Java or Scala Version
   
   OpenJDK 11
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes, I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

