Hi all,

I got the following error message (see below) when Spark (standalone, local)
writes to the local Windows file system. Other Java applications can read from
and write to the local disk without problems.
Has anyone seen or read about a similar issue?
This may not be related to the Spark deployment...

-----------------------
java.io.IOException (java.io.IOException: Cannot run program
"cygpath": CreateProcess error=2, The system cannot find the file specified)

java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
org.apache.hadoop.util.Shell.runCommand(Shell.java:206)
org.apache.hadoop.util.Shell.run(Shell.java:188)
org.apache.hadoop.fs.FileUtil$CygPathCommand.<init>(FileUtil.java:412)
org.apache.hadoop.fs.FileUtil.makeShellPath(FileUtil.java:438)
org.apache.hadoop.fs.FileUtil.makeShellPath(FileUtil.java:465)
org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:592)
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:584)
org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:427)
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:465)
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:433)
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:781)
org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
org.apache.hadoop.mapred.SparkHadoopWriter.open(SparkHadoopWriter.scala:86)
org.apache.spark.rdd.PairRDDFunctions.writeToFile$1(PairRDDFunctions.scala:667)
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$2.apply(PairRDDFunctions.scala:680)
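
For context, the trace shows Hadoop's RawLocalFileSystem shelling out to the Cygwin "cygpath" utility (via Shell.runCommand) to convert the Windows path before setting file permissions; the IOException is simply ProcessBuilder failing because "cygpath" is not on the PATH. A minimal standalone sketch of that failure mode (the command name here is a deliberately nonexistent placeholder, not Hadoop's actual invocation):

```java
import java.io.IOException;

public class CygpathRepro {
    public static void main(String[] args) {
        // Mirrors how Hadoop's Shell launches an external command;
        // "cygpath-not-on-path" is a hypothetical binary that is absent,
        // just as "cygpath" is absent without a Cygwin installation.
        ProcessBuilder pb = new ProcessBuilder("cygpath-not-on-path", "-u", "C:\\tmp");
        try {
            pb.start();
            System.out.println("started (binary unexpectedly found)");
        } catch (IOException e) {
            // Same failure class as in the trace above:
            // java.io.IOException: Cannot run program "..." ...
            System.out.println("IOException: " + e.getMessage());
        }
    }
}
```

So the write itself is fine; the create() path just cannot reach the permission step without the external tool it expects on Windows.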



Environment
---------------
OS: Windows 7 Enterprise, 64-bit
JDK 1.7.0_45, 64-bit
Spark 0.8.0
