Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21669#discussion_r215427665
  
    --- Diff: examples/src/main/scala/org/apache/spark/examples/HdfsTest.scala ---
    @@ -41,6 +41,8 @@ object HdfsTest {
           val end = System.currentTimeMillis()
           println(s"Iteration $iter took ${end-start} ms")
         }
    +    println(s"File contents: ${file.map(s => s.toString).collect().mkString(",")}")
    --- End diff ---
    
    Spark's convention is either `.map { s => s.toString }` or `.map(_.toString)`. I noticed this in a few other places too.
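    A minimal sketch of the two conventional forms, using a stand-in `Seq` rather than the RDD from the diff:

    ```scala
    object MapStyle {
      def main(args: Array[String]): Unit = {
        // Stand-in for the `file` RDD in the diff; a plain Seq for illustration.
        val file = Seq(1, 2, 3)

        // Underscore shorthand, preferred when the body is a single method call.
        println(s"File contents: ${file.map(_.toString).mkString(",")}")

        // Braces with a named parameter, for multi-expression or longer bodies.
        println(s"File contents: ${file.map { s => s.toString }.mkString(",")}")
      }
    }
    ```

    Both forms produce identical output; the convention is about which syntax to reach for, not behavior.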


---
