No, look at the Spark UI. You can see all three were executed.

On Thu, Apr 16, 2015 at 12:05 PM, Vinod KC <vinod.kc...@gmail.com> wrote:
> Hi Sean,
>
> In paste mode, the shell is evaluating only the last action. It ignores the
> previous actions, i.e. it is not executing the actions
> textFile.count() and textFile.first().
>
> Thanks
> Vinod
>
> I'm not sure I understand what you are suggesting is wrong. It prints the
> result of the last command. In the second case that is the whole pasted
> block, so you see 19.
> On Apr 16, 2015 11:37 AM, "vinodkc" <[hidden email]> wrote:
>
>> Hi All,
>>
>> I faced the issue below while working with Spark. It seems spark-shell
>> paste mode is not consistent.
>>
>> Example code
>> ---------------
>> val textFile = sc.textFile("README.md")
>> textFile.count()
>> textFile.first()
>> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
>> textFile.filter(line => line.contains("Spark")).count()
>>
>> Step 1 : Run the above code in spark-shell
>> --------
>>
>> scala> val textFile = sc.textFile("README.md")
>> textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1]
>> at textFile at <console>:21
>>
>> scala> textFile.count()
>> res0: Long = 98
>>
>> scala> textFile.first()
>> res1: String = # Apache Spark
>>
>> scala> val linesWithSpark = textFile.filter(line =>
>> line.contains("Spark"))
>> linesWithSpark: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at
>> filter at <console>:23
>>
>> scala> textFile.filter(line => line.contains("Spark")).count()
>> res2: Long = 19
>>
>> Result 1: The following actions are evaluated properly:
>> textFile.count(), textFile.first(), textFile.filter(line =>
>> line.contains("Spark")).count()
>> res0: Long = 98, res1: String = # Apache Spark, res2: Long = 19
>>
>> Step 2 : Run the above code in spark-shell paste mode
>> scala> :p
>> // Entering paste mode (ctrl-D to finish)
>>
>> val textFile = sc.textFile("README.md")
>> textFile.count()
>> textFile.first()
>> val linesWithSpark =
>> textFile.filter(line => line.contains("Spark"))
>> textFile.filter(line => line.contains("Spark")).count()
>>
>> // Exiting paste mode, now interpreting.
>>
>> textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1]
>> at textFile at <console>:21
>> linesWithSpark: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at
>> filter at <console>:24
>> res0: Long = 19
>>
>> scala>
>>
>> Result 2: Only one action result is printed:
>> textFile.filter(line => line.contains("Spark")).count()
>> res0: Long = 19
>>
>> Expected result : Result 1 and Result 2 should be the same.
>>
>> I feel this is an issue with spark-shell. I fixed and verified it
>> locally. If the community also thinks it needs to be handled, I can make a
>> PR.
>>
>> Thanks
>> Vinod KC
>>
>> --
>> View this message in context:
>> http://apache-spark-developers-list.1001551.n3.nabble.com/spark-shell-paste-mode-is-not-consistent-tp11621.html
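A minimal sketch of the point Sean is making, assuming ordinary Scala semantics and no Spark dependency (the collection names here are illustrative, not from the original thread): a :paste block is compiled as a single unit, so every expression in it still executes (side effects, and in Spark's case the jobs visible in the UI, all happen), but the REPL binds and prints only the value of the final expression as res0.

```scala
// Illustrative stand-in for the README lines used in the thread.
object PasteModeSketch {
  def main(args: Array[String]): Unit = {
    val lines = List("# Apache Spark", "build instructions", "Spark is fast")

    // These expressions all run when the block executes...
    println(lines.size)   // executes: the "count" still happens
    println(lines.head)   // executes: the "first" still happens

    val withSpark = lines.filter(_.contains("Spark"))

    // ...but in a :paste block, only this last expression's value
    // would be reported back as res0 by the REPL.
    println(withSpark.size)
  }
}
```

In line-by-line mode each statement is its own compilation unit, so each gets its own resN; in paste mode the whole block is one unit and only its final value is echoed, which is consistent with Sean's observation that all three actions still show up in the Spark UI.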
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org