yuqi1129 commented on a change in pull request #2353:
URL: https://github.com/apache/iotdb/pull/2353#discussion_r562316844
##########
File path: server/src/main/java/org/apache/iotdb/db/engine/storagegroup/StorageGroupProcessor.java
##########
@@ -969,87 +969,86 @@ private TsFileProcessor getOrCreateTsFileProcessorIntern(long timeRangeId, boolean sequence)
       throws IOException, DiskSpaceInsufficientException {
-    TsFileProcessor res;
-    // we have to ensure only one thread can change workSequenceTsFileProcessors
-    writeLock();
-    try {
-      res = tsFileProcessorTreeMap.get(timeRangeId);
-      if (res == null) {
-        // we have to remove oldest processor to control the num of the memtables
-        // TODO: use a method to control the number of memtables
-        if (tsFileProcessorTreeMap.size()
-            >= IoTDBDescriptor.getInstance().getConfig().getConcurrentWritingTimePartition()) {
-          Map.Entry<Long, TsFileProcessor> processorEntry = tsFileProcessorTreeMap.firstEntry();
-          logger.info(
-              "will close a {} TsFile because too many active partitions ({} > {}) in the storage group {},",
-              sequence, tsFileProcessorTreeMap.size(),
-              IoTDBDescriptor.getInstance().getConfig().getConcurrentWritingTimePartition(),
-              storageGroupName);
-          asyncCloseOneTsFileProcessor(sequence, processorEntry.getValue());
-        }
+    TsFileProcessor res = tsFileProcessorTreeMap.get(timeRangeId);
-        // build new processor
-        TsFileProcessor newProcessor = createTsFileProcessor(sequence, timeRangeId);
-        tsFileProcessorTreeMap.put(timeRangeId, newProcessor);
-        tsFileManagement.add(newProcessor.getTsFileResource(), sequence);
-        res = newProcessor;
-      }
+    // Use double-check to shorten the lock range
+    if (null == res) {
+      writeLock();
Review comment:
Yes, further optimization of the lock can be made; I will remove the related duplicated code later, thanks.
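
The double-check pattern the diff introduces can be sketched as follows. This is a hypothetical, simplified stand-in, not the actual IoTDB code: `DoubleCheckSketch`, the `String` values, and the `ConcurrentSkipListMap` replace the real `TsFileProcessor`, `createTsFileProcessor`, and `tsFileProcessorTreeMap` machinery. Note that the lock-free first check is only safe here because a concurrent map is used; with a plain `TreeMap`, reading outside the lock while another thread mutates it would be a data race.

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DoubleCheckSketch {
  // Concurrent map so the unlocked first lookup is safe (assumption:
  // a plain TreeMap would need the lock for reads as well).
  private final ConcurrentSkipListMap<Long, String> processors =
      new ConcurrentSkipListMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public String getOrCreate(long timeRangeId) {
    // First check: no lock taken on the common fast path.
    String res = processors.get(timeRangeId);
    if (res == null) {
      lock.writeLock().lock();
      try {
        // Second check under the lock: another thread may have
        // created the entry between the first check and lock acquisition.
        res = processors.get(timeRangeId);
        if (res == null) {
          // Stand-in for the expensive createTsFileProcessor(...) call.
          res = "processor-" + timeRangeId;
          processors.put(timeRangeId, res);
        }
      } finally {
        lock.writeLock().unlock();
      }
    }
    return res;
  }
}
```

The point of the second check is correctness, not speed: without it, two threads that both saw `null` would each create and register a processor for the same partition. The speedup comes from the first check, which lets the vast majority of calls (where the processor already exists) skip the write lock entirely.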
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]