HeartSaVioR commented on code in PR #40981:
URL: https://github.com/apache/spark/pull/40981#discussion_r1179859324


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##########
@@ -247,14 +253,7 @@ class RocksDB(
   }
 
   def prefixScan(prefix: Array[Byte]): Iterator[ByteArrayPair] = {
-    val threadId = Thread.currentThread().getId
-    val iter = prefixScanReuseIter.computeIfAbsent(threadId, tid => {

Review Comment:
   My memory is hazy, but there should be a reason I didn't follow the cleanup 
approach used for the iterator in iterator(). That approach relies on the task 
completion listener, and I roughly remember there was an issue with the iterator 
in prefixScan that prevented me from doing the same. 
   
   If that issue was related to WriteBatch, then we may not need this anymore 
and can just follow the approach of iterator(), but honestly I haven't tracked 
the issue down. 
   
   That said, if we remove this, we probably need to employ the cleanup 
approach we use in iterator().
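
   For context, the cleanup approach in iterator() ties the native iterator's lifecycle to task completion so it is closed even if the caller never exhausts it. Here is a minimal, self-contained Scala sketch of that pattern; the names (`NativeIter`, `FakeTaskContext`, `openWithCleanup`) are hypothetical stand-ins, and the real code registers the listener via Spark's TaskContext rather than this simplified mechanism:

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical stand-in for a native RocksDB iterator that must be closed
// explicitly to release off-heap resources.
class NativeIter extends AutoCloseable {
  var closed = false
  override def close(): Unit = { closed = true }
}

// Hypothetical stand-in for Spark's task-completion-listener mechanism:
// callbacks registered here run when the task finishes.
class FakeTaskContext {
  private val listeners = ArrayBuffer.empty[() => Unit]
  def addTaskCompletionListener(f: () => Unit): Unit = listeners += f
  def markTaskCompleted(): Unit = listeners.foreach(_.apply())
}

object IterCleanupSketch {
  // Open an iterator and register its close() as a completion callback,
  // mirroring the cleanup approach described for iterator().
  def openWithCleanup(ctx: FakeTaskContext): NativeIter = {
    val iter = new NativeIter
    ctx.addTaskCompletionListener(() => iter.close())
    iter
  }
}
```

   The point of the pattern is that the caller no longer needs to close the iterator itself; when the task completes, every registered listener fires and the native resource is released exactly once.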



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

