Hi
This is because you are running in cluster mode, but the input file is a local file. The "hdfs://master:9000" string you see in the error is your Hadoop default filesystem (fs.defaultFS in core-site.xml), which gets prepended to the load path.
1. If you use cluster mode, please load files from HDFS.
2. If you just want to load local files, please use local mode.
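
For example, to load from HDFS (a minimal sketch, assuming cc is your CarbonContext; the HDFS path below is illustrative, and the file must first be copied into HDFS, e.g. with "hdfs dfs -put /carbondata/pt/sample.csv /carbondata/pt/"):

scala> val dataFilePath = "hdfs://master:9000/carbondata/pt/sample.csv"
scala> cc.sql(s"load data inpath '$dataFilePath' into table test_table")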
李寅威 wrote:
> Hi,
>
> when i run the following script:
>
>
> scala> val dataFilePath = new File("/carbondata/pt/sample.csv").getCanonicalPath
> scala> cc.sql(s"load data inpath '$dataFilePath' into table test_table")
>
>
> it turns out:
>
>
> org.apache.carbondata.processing.etl.DataLoadingException: The input file
> does not exist:
> hdfs://master:9000hdfs://master/opt/data/carbondata/pt/sample.csv
> at
> org.apache.spark.util.FileUtils$$anonfun$getPaths$1.apply$mcVI$sp(FileUtils.scala:66)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>
>
> It confuses me why there is a string "hdfs://master:9000" before
> "hdfs://master/opt/data/carbondata/pt/sample.csv"; I can't find any
> configuration that contains "hdfs://master:9000". Could anyone help me?