shuwenwei commented on issue #11375: URL: https://github.com/apache/iotdb/issues/11375#issuecomment-1791991583
> > > > Hi, I noticed that you are writing a request size of more than 200M, which is actually too large. We suggest reducing the batch size to reduce the memory pressure on the server.
> > > >
> > > > In addition, in version 1.2.0, IoTConsensus was unable to synchronize requests larger than 100M in size, which could cause individual nodes to pile up WAL logs after receiving a large request, up to 50GB. This issue was [fixed](https://github.com/apache/iotdb/pull/11145) in version 1.2.2.
> > > >
> > > > It is recommended to upgrade to 1.2.2, reduce the batch size appropriately, and try again.
> > >
> > > Thanks for your advice. I have upgraded to 1.2.2 and the WAL problem seems solved. But I have seen some **INFO** logs about compaction memory in both 1.2.0 and 1.2.2. I have changed the memory ratio of the storage engine, but the log cannot be avoided. Does this have any effect?
> > >
> > > ```
> > > 2023-11-01 06:09:29,157 [pool-44-IoTDB-Compaction-Worker-8] INFO o.a.i.d.s.d.c.e.t.CrossSpaceCompactionTask:397 - No enough memory for current compaction task root.pre_trc-27-2922 task seq files are [file is /data1/iotdb/apache-iotdb-1.2.2-all-bin/data/datanode/data/sequence/root.pre_trc/27/2922/1697717961585-1-2-4.tsfile, status: COMPACTION_CANDIDATE] , unseq files are [file is /data1/iotdb/apache-iotdb-1.2.2-all-bin/data/datanode/data/unsequence/root.pre_trc/27/2922/1698751814360-23-1-0.tsfile, status: COMPACTION_CANDIDATE]
> > > org.apache.iotdb.db.storageengine.dataregion.compaction.execute.exception.CompactionMemoryNotEnoughException: Required memory cost 1115215382 bytes is greater than the total memory budget for compaction 823216046 bytes
> > >     at org.apache.iotdb.db.storageengine.rescon.memory.SystemInfo.addCompactionMemoryCost(SystemInfo.java:223)
> > >     at org.apache.iotdb.db.storageengine.dataregion.compaction.execute.task.CrossSpaceCompactionTask.checkValidAndSetMerging(CrossSpaceCompactionTask.java:389)
> > >     at org.apache.iotdb.db.storageengine.dataregion.compaction.schedule.CompactionWorker.run(CompactionWorker.java:59)
> > >     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> > >     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >     at java.lang.Thread.run(Thread.java:748)
> > > ```
> >
> > Do you use aligned series for storage?
>
> Yes, all time series are aligned.

Currently, compaction of aligned series may require more memory. You can continue to adjust the global memory allocation ratio and the memory allocation ratio within the storage engine, or simply ignore this log if other compaction tasks can run successfully.

```
# Memory Allocation Ratio: StorageEngine, QueryEngine, SchemaEngine, Consensus, StreamingEngine and Free Memory.
# The parameter form is a:b:c:d:e:f, where a, b, c, d, e and f are integers. For example: 1:1:1:1:1:1, 6:2:1:1:1:1
# If you have a high level of write pressure and a low level of read pressure, adjust it to, for example, 6:1:1:1:1:1
# datanode_memory_proportion=3:3:1:1:1:1

# Memory allocation ratio in StorageEngine: Write, Compaction
# The parameter form is a:b, where a and b are integers. For example: 8:2, 7:3
# storage_engine_memory_proportion=8:2
```
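To see how the two proportion settings relate to the "total memory budget for compaction" figure in the exception, here is an illustrative back-of-the-envelope calculation (not IoTDB's exact internal code). It assumes the compaction budget is roughly `heap × (StorageEngine share of datanode_memory_proportion) × (Compaction share of storage_engine_memory_proportion)`; the 16 GB heap is a hypothetical value.

```python
# Illustrative sketch, NOT IoTDB's exact accounting: estimate the compaction
# memory budget from the two proportion settings quoted above.

def compaction_budget(heap_bytes, datanode_proportion, storage_engine_proportion):
    """Estimate the compaction memory budget in bytes.

    datanode_proportion: e.g. [3, 3, 1, 1, 1, 1] -- StorageEngine is index 0.
    storage_engine_proportion: e.g. [8, 2] -- Compaction is index 1.
    Integer arithmetic keeps the result deterministic.
    """
    return (heap_bytes * datanode_proportion[0] * storage_engine_proportion[1]
            // (sum(datanode_proportion) * sum(storage_engine_proportion)))

heap = 16 * 1024**3          # hypothetical 16 GiB DataNode heap
required = 1115215382        # "Required memory cost" from the exception message

# Default ratios: 3:3:1:1:1:1 and 8:2 -> budget is below the required cost.
print(compaction_budget(heap, [3, 3, 1, 1, 1, 1], [8, 2]) >= required)  # False

# Shifting the storage engine ratio to 7:3 raises the budget above it.
print(compaction_budget(heap, [3, 3, 1, 1, 1, 1], [7, 3]) >= required)  # True
```

Under these assumptions, tuning either ratio (or simply giving the DataNode a larger heap) is what moves the budget above the task's required cost.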
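The earlier advice to reduce the batch size amounts to splitting one oversized write request into several smaller ones on the client side. A minimal sketch, where `send_batch` is a placeholder for whatever client call you actually use (e.g. an IoTDB session insert), not a real IoTDB API name:

```python
# Hypothetical sketch of the "reduce the batch size" advice: rather than one
# huge request (>200M), send the rows in bounded batches so each request stays
# well under the consensus layer's comfortable size.

def chunked(rows, batch_size):
    """Yield successive slices of `rows`, each with at most `batch_size` items."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def write_in_batches(rows, batch_size, send_batch):
    """Send `rows` through `send_batch` one bounded chunk at a time."""
    sent = 0
    for batch in chunked(rows, batch_size):
        send_batch(batch)  # one request per batch instead of one giant request
        sent += len(batch)
    return sent
```

The right `batch_size` depends on your row width; the point is to size batches so a single request is far below the 100M threshold the consensus layer previously struggled with.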
