zhangyue19921010 commented on pull request #3666:
URL: https://github.com/apache/hudi/pull/3666#issuecomment-919852009


   Here's the driver log:
   ```
   21/09/15 08:26:13 INFO client.SparkRDDWriteClient: Committing Clustering 
20210915043458. Finished with result xxxxxxx
   21/09/15 08:26:13 INFO timeline.HoodieActiveTimeline: Checking for file 
exists 
?s3a://xxx/xxxx/xxx/xxxx_delivered_hourly/.hoodie/20210915043458.replacecommit.inflight
   21/09/15 08:26:14 INFO timeline.HoodieActiveTimeline: Create new file for 
toInstant 
?s3a://xxxx/xxx/xxxxxx/xxxxx_delivered_hourly/.hoodie/20210915043458.replacecommit
   21/09/15 08:26:14 INFO client.SparkRDDWriteClient: Clustering successfully 
on commit 20210915043458
   ```
   
   Here's one of the executor logs:
   ```
   21/09/15 08:28:39 INFO queue.IteratorBasedQueueProducer: starting to buffer 
records
   21/09/15 08:28:39 INFO queue.BoundedInMemoryExecutor: starting consumer 
thread
   21/09/15 08:28:39 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:28:39 INFO table.MarkerFiles: Creating Marker 
Path=s3a://xxxxx/xxx/xxxx/xxxxx_delivered_hourly/.hoodie/.temp/20210915043458/2021080920/eb3c98ba-1df9-41c6-bd7c-a4708f4728db-0_3-82-2226_20210915043458.parquet.marker.CREATE
   21/09/15 08:28:40 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:28:40 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:28:40 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:28:40 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:28:40 INFO io.HoodieCreateHandle: New CreateHandle for partition 
:2021080920 with fileId eb3c98ba-1df9-41c6-bd7c-a4708f4728db-0
   21/09/15 08:29:59 INFO hadoop.InternalParquetRecordWriter: mem size 
127089967 > 125829120: flushing 1820100 records to disk.
   21/09/15 08:29:59 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 90099622
   21/09/15 08:31:28 INFO hadoop.InternalParquetRecordWriter: mem size 
128373302 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:31:28 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 92189808
   21/09/15 08:32:36 INFO hadoop.InternalParquetRecordWriter: mem size 
127167016 > 125829120: flushing 1570100 records to disk.
   21/09/15 08:32:36 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 88982724
   21/09/15 08:33:53 INFO hadoop.InternalParquetRecordWriter: mem size 
128862551 > 125829120: flushing 1831855 records to disk.
   21/09/15 08:33:53 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 91370793
   21/09/15 08:35:20 INFO hadoop.InternalParquetRecordWriter: mem size 
131655158 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:35:20 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 92697898
   21/09/15 08:36:48 INFO hadoop.InternalParquetRecordWriter: mem size 
131995642 > 125829120: flushing 2090100 records to disk.
   21/09/15 08:36:48 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 89312422
   21/09/15 08:38:15 INFO hadoop.InternalParquetRecordWriter: mem size 
129924890 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:38:15 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 92490914
   21/09/15 08:39:43 INFO hadoop.InternalParquetRecordWriter: mem size 
132976241 > 125829120: flushing 2090100 records to disk.
   21/09/15 08:39:43 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 93174323
   21/09/15 08:41:11 INFO hadoop.InternalParquetRecordWriter: mem size 
127472104 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:41:11 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 90252952
   21/09/15 08:42:27 INFO hadoop.InternalParquetRecordWriter: mem size 
126290145 > 125829120: flushing 1820100 records to disk.
   21/09/15 08:42:27 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 88456116
   21/09/15 08:43:54 INFO hadoop.InternalParquetRecordWriter: mem size 
130410565 > 125829120: flushing 2090100 records to disk.
   21/09/15 08:43:54 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 89164496
   21/09/15 08:45:21 INFO hadoop.InternalParquetRecordWriter: mem size 
131209888 > 125829120: flushing 2090100 records to disk.
   21/09/15 08:45:21 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 87793573
   21/09/15 08:46:39 INFO hadoop.InternalParquetRecordWriter: mem size 
126494819 > 125829120: flushing 1820100 records to disk.
   21/09/15 08:46:39 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 88404896
   21/09/15 08:47:24 INFO hadoop.InternalParquetRecordWriter: mem size 
129103207 > 125829120: flushing 1040100 records to disk.
   21/09/15 08:47:24 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 80514268
   21/09/15 08:48:41 INFO hadoop.InternalParquetRecordWriter: mem size 
128490439 > 125829120: flushing 1820100 records to disk.
   21/09/15 08:48:41 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 89663525
   21/09/15 08:50:10 INFO hadoop.InternalParquetRecordWriter: mem size 
132832927 > 125829120: flushing 2088626 records to disk.
   21/09/15 08:50:10 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 94414992
   21/09/15 08:50:11 INFO io.HoodieCreateHandle: Closing the file 
eb3c98ba-1df9-41c6-bd7c-a4708f4728db-0 as we are done with all the records 
30491881
   21/09/15 08:50:11 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 0
   21/09/15 08:50:18 INFO io.HoodieCreateHandle: CreateHandle for partitionPath 
2021080920 fileID eb3c98ba-1df9-41c6-bd7c-a4708f4728db-0, took 1299522 ms.
   21/09/15 08:50:18 INFO table.MarkerFiles: Creating Marker 
Path=s3a://xxxxx/xxx/xxxxx/xxxxx_delivered_hourly/.hoodie/.temp/20210915043458/2021080920/eb3c98ba-1df9-41c6-bd7c-a4708f4728db-1_3-82-2226_20210915043458.parquet.marker.CREATE
   21/09/15 08:50:19 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:50:19 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:50:19 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:50:19 INFO fs.FSUtils: Hadoop Configuration: fs.defaultFS: 
[hdfs://dev-HBase-hadoop-aws], Config:[Configuration: ], FileSystem: 
[org.apache.hadoop.fs.s3a.S3AFileSystem@6df37e26]
   21/09/15 08:50:19 INFO io.HoodieCreateHandle: New CreateHandle for partition 
:2021080920 with fileId eb3c98ba-1df9-41c6-bd7c-a4708f4728db-1
   21/09/15 08:51:47 INFO hadoop.InternalParquetRecordWriter: mem size 
126517453 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:51:47 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 88199400
   21/09/15 08:53:18 INFO hadoop.InternalParquetRecordWriter: mem size 
129113451 > 125829120: flushing 2080100 records to disk.
   21/09/15 08:53:18 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 91238066
   21/09/15 08:54:28 INFO hadoop.InternalParquetRecordWriter: mem size 
125984672 > 125829120: flushing 1570100 records to disk.
   21/09/15 08:54:28 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 87981933
   21/09/15 08:55:47 INFO hadoop.InternalParquetRecordWriter: mem size 
126144533 > 125829120: flushing 1820100 records to disk.
   21/09/15 08:55:47 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 88299835
   21/09/15 08:57:05 INFO hadoop.InternalParquetRecordWriter: mem size 
127071756 > 125829120: flushing 1830100 records to disk.
   21/09/15 08:57:05 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 89667896
   21/09/15 08:58:24 INFO hadoop.InternalParquetRecordWriter: mem size 
127160360 > 125829120: flushing 1830100 records to disk.
   21/09/15 08:58:24 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 89757682
   21/09/15 08:58:26 INFO queue.IteratorBasedQueueProducer: finished buffering 
records
   21/09/15 08:58:27 INFO io.HoodieCreateHandle: Closing the file 
eb3c98ba-1df9-41c6-bd7c-a4708f4728db-1 as we are done with all the records 
11244525
   21/09/15 08:58:27 INFO hadoop.InternalParquetRecordWriter: Flushing mem 
columnStore to file. allocated memory: 14709580
   21/09/15 08:58:30 INFO io.HoodieCreateHandle: CreateHandle for partitionPath 
2021080920 fileID eb3c98ba-1df9-41c6-bd7c-a4708f4728db-1, took 492192 ms.
   21/09/15 08:58:30 INFO queue.BoundedInMemoryExecutor: Queue Consumption is 
done; notifying producer threads
   21/09/15 08:58:30 INFO memory.MemoryStore: Block rdd_225_3 stored as values 
in memory (estimated size 771.0 B, free 30.9 GB)
   21/09/15 08:58:30 INFO executor.Executor: Finished task 3.0 in stage 82.0 
(TID 2226). 1182 bytes result sent to driver
   21/09/15 08:58:30 INFO executor.CoarseGrainedExecutorBackend: Driver 
commanded a shutdown
   21/09/15 08:58:30 INFO executor.CoarseGrainedExecutorBackend: Driver from 
ip-10-23-24-94.ec2.internal:45530 disconnected during shutdown
   21/09/15 08:58:30 INFO executor.CoarseGrainedExecutorBackend: Driver from 
ip-10-23-24-94.ec2.internal:45530 disconnected during shutdown
   21/09/15 08:58:31 INFO memory.MemoryStore: MemoryStore cleared
   21/09/15 08:58:31 INFO storage.BlockManager: BlockManager stopped
   21/09/15 08:58:31 INFO util.ShutdownHookManager: Shutdown hook called
   ```
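   The repeated `InternalParquetRecordWriter` lines above follow one pattern: the writer flushes its in-memory column store to disk whenever the buffered size exceeds the configured Parquet block size (125829120 bytes, i.e. 120 MB, in these logs). As a minimal sketch of that size-triggered flush check (illustrative only — the class and method names below are hypothetical, not Parquet or Hudi code):

   ```python
   # Hypothetical sketch of a size-triggered buffer flush, assuming a fixed
   # byte threshold like the 125829120-byte (120 MB) block size in the logs.
   # BufferedWriter / flush_threshold are illustrative names, not real APIs.

   class BufferedWriter:
       def __init__(self, flush_threshold: int = 125829120):
           self.flush_threshold = flush_threshold
           self.buffered_bytes = 0
           self.buffered_records = 0
           self.flushes = []  # record count of each flush, for inspection

       def write(self, record_size: int) -> None:
           self.buffered_bytes += record_size
           self.buffered_records += 1
           # Mirrors the log line "mem size X > 125829120: flushing N records"
           if self.buffered_bytes > self.flush_threshold:
               self.flush()

       def flush(self) -> None:
           self.flushes.append(self.buffered_records)
           self.buffered_bytes = 0
           self.buffered_records = 0


   # Small threshold so the flush pattern is visible: 13-byte records
   # cross a 100-byte threshold on every 8th record.
   writer = BufferedWriter(flush_threshold=100)
   for _ in range(25):
       writer.write(13)
   print(writer.flushes)  # → [8, 8, 8]
   ```

   This is why the executor keeps writing long after the driver's 08:26 commit log: each handle buffers and flushes many times before `Closing the file ... as we are done with all the records` appears.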


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.