Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5645#discussion_r29211121
  
    --- Diff: streaming/src/main/scala/org/apache/spark/streaming/rdd/WriteAheadLogBackedBlockRDD.scala ---
    @@ -96,9 +99,27 @@ class WriteAheadLogBackedBlockRDD[T: ClassTag](
            logDebug(s"Read partition data of $this from block manager, block $blockId")
            iterator
          case None => // Data not found in Block Manager, grab it from write ahead log file
    -        val reader = new WriteAheadLogRandomReader(partition.segment.path, hadoopConf)
    -        val dataRead = reader.read(partition.segment)
    -        reader.close()
    +        var dataRead: ByteBuffer = null
    --- End diff ---
    
    Why allocate (at least) two objects when it is completely obvious they are not going to be used? The null does not get exposed to anything outside the function, and hence is okay to have.
    
    If you look at the rest of the Spark source code, we don't strictly adhere to the Scala way of doing things; rather, we balance code understandability (limiting the levels of functional nesting) and efficiency (while loops instead of for-comprehensions when perf matters) against Scala style.
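
    As a concrete example of the efficiency point (illustrative only, not code from this PR): a for-comprehension over an array typically desugars to a foreach call taking a closure, while the equivalent while loop compiles to a plain indexed loop.

        object LoopStyles {
          // for-comprehension: desugars to arr.foreach(x => total += x),
          // which allocates a closure and dispatches through Function1.
          def sumFor(arr: Array[Long]): Long = {
            var total = 0L
            for (x <- arr) total += x
            total
          }

          // while loop: the style Spark internals prefer on hot paths;
          // no closure allocation, just a plain counter.
          def sumWhile(arr: Array[Long]): Long = {
            var total = 0L
            var i = 0
            while (i < arr.length) {
              total += arr(i)
              i += 1
            }
            total
          }
        }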

