Hi,

I configured PyCharm as described on Stack Overflow, with SPARK_HOME and
HADOOP_CONF_DIR set, and I downloaded winutils to use with the prebuilt
version of Spark 2.0 (PySpark 2.0).
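
Roughly, the setup in the script looks like this (a minimal sketch; the
paths are placeholders for my local install, with SPARK_HOME pointing at
the prebuilt spark-2.0.0-bin-hadoop2.6 folder and HADOOP_HOME at the
folder whose bin\ contains winutils.exe -- in PyCharm I set the same
variables in the run configuration's environment):

import os
import sys

# Placeholder paths for my local install (adjust as needed).
os.environ["SPARK_HOME"] = "C:\\spark-2.0.0-bin-hadoop2.6"
os.environ["HADOOP_HOME"] = "C:\\hadoop"          # bin\winutils.exe lives here
os.environ["HADOOP_CONF_DIR"] = "C:\\hadoop\\conf"

# Make the bundled PySpark and Py4J importable when running plain python.exe.
sys.path.append(os.path.join(os.environ["SPARK_HOME"], "python"))
sys.path.append(os.path.join(os.environ["SPARK_HOME"],
                             "python", "lib", "py4j-0.10.1-src.zip"))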

I get the error below. If you can help me find a solution, thanks in advance.

C:\Users\AppData\Local\Continuum\Anaconda2\python.exe C:/workspacecode/pyspark/pyspark/churn/test.py --master local[*]
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/08/05 15:32:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/05 15:32:35 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Traceback (most recent call last):
  File "C:/workspacecode/pyspark/pyspark/churn/test.py", line 11, in <module>
    print rdd.first()
  File "C:\spark-2.0.0-bin-hadoop2.6\python\pyspark\rdd.py", line 1328, in first
    rs = self.take(1)
  File "C:\spark-2.0.0-bin-hadoop2.6\python\pyspark\rdd.py", line 1280, in take
    totalParts = self.getNumPartitions()
  File "C:\spark-2.0.0-bin-hadoop2.6\python\pyspark\rdd.py", line 356, in getNumPartitions
    return self._jrdd.partitions().size()
  File "C:\spark-2.0.0-bin-hadoop2.6\python\lib\py4j-0.10.1-src.zip\py4j\java_gateway.py", line 933, in __call__
  File "C:\spark-2.0.0-bin-hadoop2.6\python\lib\py4j-0.10.1-src.zip\py4j\protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:workspacecode/rapexp1412.csv
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:60)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:211)
    at java.lang.T
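
For context, test.py is essentially the following (a minimal sketch
reconstructed from the trace; the app name is a placeholder). One thing I
notice while pasting this: the exception shows file:/C:workspacecode/rapexp1412.csv
with no separator after the drive letter, so the path string in my script
may be losing its backslash:

from pyspark import SparkContext

sc = SparkContext("local[*]", "churn-test")  # app name is a placeholder

# Raw string so the Windows backslashes are not eaten as escapes;
# the trace suggests my original path came through as "C:workspacecode/...".
rdd = sc.textFile(r"C:\workspacecode\rapexp1412.csv")
print rdd.first()  # line 11 in the trace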
