LuciferYang commented on PR #36467:
URL: https://github.com/apache/spark/pull/36467#issuecomment-1123039782

   > > The issue to be solved is: ensure all RocksDBIterators are closed before the RocksDB is closed, otherwise the JVM will crash when using RocksDB with debug mode enabled.
   > 
   > This is already handled by `iteratorTracker`, right? RocksDB.close() already iterates over all `RocksDBIterator` and closes them before closing itself. (This will also mean that it.close within db.close will need to handle the corner case - pass a forceClose even if `notifyIteratorClosed` returns `false`)
   
   If the definition of `iteratorTracker` were changed from `ConcurrentLinkedQueue<Reference<RocksDBIterator<?>>>` to `ConcurrentLinkedQueue<RocksDBIterator<?>>`, then you would be right.
   
   However, `iteratorTracker` currently holds `RocksDBIterator` instances through `WeakReference`s, so after a GC it no longer sees every unclosed `RocksDBIterator`; some of them are only reachable from the JVM `Finalizer` thread. We cannot interfere with the scheduling of the `Finalizer` thread, so there is no guarantee that every unclosed `RocksDBIterator` pending finalization has been closed before `RocksDB.close` executes. At the same time, the thread executing `RocksDB.close()` and the `Finalizer` thread contend for the same lock, so if `RocksDB.close()` runs first, the pending `db.closeIterator(this)` calls in the `Finalizer` thread can only wait.
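   To make the weak-reference gap concrete, here is a minimal sketch (class names `SketchDb`/`SketchIterator` are illustrative, not Spark's actual classes): the tracker holds iterators through `WeakReference`s, so `close()` can only reach iterators the GC has not yet cleared, and the iterator-close path takes the same lock as `close()`.

   ```java
   import java.lang.ref.Reference;
   import java.lang.ref.WeakReference;
   import java.util.concurrent.ConcurrentLinkedQueue;

   // Hypothetical sketch of the tracking scheme discussed above.
   class SketchDb {
     // Weak references let GC reclaim iterators whose owners never closed them;
     // the trade-off is that the tracker may no longer see every unclosed iterator.
     private final ConcurrentLinkedQueue<Reference<SketchIterator>> iteratorTracker =
         new ConcurrentLinkedQueue<>();
     private final Object lock = new Object();
     private boolean dbClosed = false;

     SketchIterator newIterator() {
       SketchIterator it = new SketchIterator(this);
       iteratorTracker.add(new WeakReference<>(it));
       return it;
     }

     void close() {
       synchronized (lock) {
         // Close every iterator the tracker can still reach. An iterator whose
         // reference was already cleared by GC (referent == null) is only
         // reachable from the Finalizer thread and cannot be closed from here --
         // the race described above.
         for (Reference<SketchIterator> ref : iteratorTracker) {
           SketchIterator it = ref.get();
           if (it != null) {
             it.close();
           }
         }
         dbClosed = true;
       }
     }

     // Called from SketchIterator.close() and, in the real code, from a
     // finalizer; it must contend for the same lock as close() above.
     void closeIterator(SketchIterator it) {
       synchronized (lock) {
         if (!dbClosed) {
           it.markClosed();
         }
       }
     }

     boolean isClosed() { return dbClosed; }
   }

   class SketchIterator {
     private final SketchDb db;
     private boolean closed = false;

     SketchIterator(SketchDb db) { this.db = db; }

     void close() {
       if (!closed) {
         db.closeIterator(this);
       }
     }

     void markClosed() { closed = true; }

     boolean isClosed() { return closed; }
   }
   ```

   In this sketch, an iterator that is still strongly reachable is closed by `SketchDb.close()`; one that is already finalizer-reachable would block in `closeIterator` until `close()` releases the lock, by which point `dbClosed` is true.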
   

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
