Github user lresende commented on a diff in the pull request:

    https://github.com/apache/bahir/pull/59#discussion_r157549138
  
    --- Diff: sql-cloudant/examples/src/main/scala/org/apache/spark/examples/sql/cloudant/CloudantStreaming.scala ---
    @@ -27,59 +27,57 @@ import org.apache.bahir.cloudant.CloudantReceiver
     
     object CloudantStreaming {
       def main(args: Array[String]) {
    -    val sparkConf = new SparkConf().setAppName("Cloudant Spark SQL External Datasource in Scala")
    +    val sparkConf = new SparkConf().setMaster("local[*]")
    +      .setAppName("Cloudant Spark SQL External Datasource in Scala")
         // Create the context with a 10 seconds batch size
         val ssc = new StreamingContext(sparkConf, Seconds(10))
     
         val changes = ssc.receiverStream(new CloudantReceiver(sparkConf, Map(
    -      "cloudant.host" -> "ACCOUNT.cloudant.com",
    -      "cloudant.username" -> "USERNAME",
    -      "cloudant.password" -> "PASSWORD",
    -      "database" -> "n_airportcodemapping")))
    -
    +      "cloudant.host" -> "examples.cloudant.com",
    +      "database" -> "sales")))
         changes.foreachRDD((rdd: RDD[String], time: Time) => {
           // Get the singleton instance of SparkSession
          val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
    --- End diff --
    
    I would just create the `spark` instance of SparkSession at the top of the
    method (where we create the SparkConf) and import the implicits there,
    instead of inside foreachRDD. This would also remove the need for the
    singleton object at the bottom.
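    For what it's worth, the suggested restructuring could look roughly like the
    sketch below (assuming Spark 2.x; the body of foreachRDD here is purely
    illustrative and not taken from the PR):

    ```scala
    import org.apache.spark.SparkConf
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.streaming.{Seconds, StreamingContext, Time}

    import org.apache.bahir.cloudant.CloudantReceiver

    object CloudantStreaming {
      def main(args: Array[String]) {
        val sparkConf = new SparkConf().setMaster("local[*]")
          .setAppName("Cloudant Spark SQL External Datasource in Scala")

        // Create the SparkSession once, at the top of the method,
        // instead of lazily inside foreachRDD via a singleton helper.
        val spark = SparkSession.builder().config(sparkConf).getOrCreate()
        import spark.implicits._

        // Create the context with a 10 seconds batch size
        val ssc = new StreamingContext(spark.sparkContext, Seconds(10))

        val changes = ssc.receiverStream(new CloudantReceiver(sparkConf, Map(
          "cloudant.host" -> "examples.cloudant.com",
          "database" -> "sales")))

        changes.foreachRDD((rdd: RDD[String], time: Time) => {
          // "spark" and its implicits are already in scope here, so no
          // SparkSessionSingleton lookup is needed. Illustrative body only:
          val df = spark.read.json(rdd.toDS())
          df.show()
        })

        ssc.start()
        ssc.awaitTermination()
      }
    }
    ```

    The driver-side closure in foreachRDD can capture the SparkSession created
    in main, so the singleton pattern is only needed when the session must be
    recreated on recovery from a checkpoint.
    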

