From
examples/src/main/scala/org/apache/spark/examples/streaming/HdfsWordCount.scala
:

    val lines = ssc.textFileStream(args(0))
    val words = lines.flatMap(_.split(" "))

In your case, it looks like `inputfile` doesn't correspond to an existing path; that is exactly what the `InvalidInputException` below is reporting.
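One way to surface the problem earlier is to check the path with the Hadoop FileSystem API before building the RDD. A minimal sketch, assuming a `SparkContext` named `sc` is in scope and `args(0)` holds the input path (variable names are illustrative):

```scala
import org.apache.hadoop.fs.{FileSystem, Path}

// Check the input path up front so the failure is explicit (hypothetical names).
val inputPath = new Path(args(0))
val fs = FileSystem.get(sc.hadoopConfiguration)
if (!fs.exists(inputPath)) {
  sys.error(s"Input path does not exist: $inputPath")
}

// Same word count as in the original snippet, just split across lines.
val inputfile = sc.textFile(inputPath.toString)
val count = inputfile
  .flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
```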

On Fri, Jul 15, 2016 at 1:05 AM, RK Spark <rkphdsp...@gmail.com> wrote:

> val count = inputfile.flatMap(line => line.split(" ")).map(word =>
> (word,1)).reduceByKey(_ + _);
> org.apache.hadoop.mapred.InvalidInputException: Input path does not exist:
>
