Well, in the source code of CarbonData, the file type is determined as:

if (property.startsWith(CarbonUtil.HDFS_PREFIX)) {
  storeDefaultFileType = FileType.HDFS;
}


and CarbonUtil.HDFS_PREFIX = "hdfs://"
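In isolation, that check behaves as below (a minimal sketch; `resolveFileType` and the two-value `FileType` are stand-ins for CarbonData's real types, not its actual API):

```scala
// Stand-in for CarbonData's FileType enum (only the two cases we need here).
object FileType extends Enumeration { val HDFS, LOCAL = Value }

val HDFS_PREFIX = "hdfs://"  // same value as CarbonUtil.HDFS_PREFIX

// Mirrors the prefix check in the snippet above: anything not starting
// with "hdfs://" falls back to the local file type.
def resolveFileType(property: String): FileType.Value =
  if (property.startsWith(HDFS_PREFIX)) FileType.HDFS else FileType.LOCAL

println(resolveFileType("hdfs://master:9000/carbondata/sample.csv")) // HDFS
println(resolveFileType("/home/hadoop/carbondata/sample.csv"))       // LOCAL
```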


but when I run the following script, the resulting dataFilePath is still a local path:


scala> val dataFilePath = new File("hdfs://master:9000/carbondata/sample.csv").getCanonicalPath
dataFilePath: String = /home/hadoop/carbondata/hdfs:/master:9000/carbondata/sample.csv
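That is exactly the problem: java.io.File models local paths only, so getCanonicalPath collapses the double slash in the scheme and prefixes the current working directory. The mangled result can never match the startsWith(HDFS_PREFIX) check above. A minimal demonstration (the paths are illustrative):

```scala
// java.io.File treats its argument as a local filesystem path:
// getCanonicalPath squashes "hdfs://" down to "hdfs:/" and resolves it
// against the current working directory.
val mangled =
  new java.io.File("hdfs://master:9000/carbondata/sample.csv").getCanonicalPath

// Passing the URI string through unchanged preserves the scheme.
val dataFilePath = "hdfs://master:9000/carbondata/sample.csv"

println(dataFilePath.startsWith("hdfs://")) // true
println(mangled.startsWith("hdfs://"))      // false: prefix destroyed
```

So the fix on the caller's side is simply to not run HDFS URIs through java.io.File at all.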





------------------ Original Message ------------------
From: "Liang Chen" <[email protected]>
Date: Dec 22, 2016 (Thursday) 8:47 PM
To: "dev" <[email protected]>

Subject: Re: etl.DataLoadingException: The input file does not exist



Hi

This is because you are using cluster mode, but the input file is a local file.
1. If you use cluster mode, please load files from HDFS.
2. If you just want to load local files, please use local mode.
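For case 1, one way to do that (a sketch, assuming the sample file has already been copied to the hypothetical HDFS location below) is to keep the path as a plain string so the "hdfs://" scheme survives, instead of canonicalizing it with java.io.File:

```scala
// Keep the HDFS URI as a plain string; do not pass it through java.io.File.
val dataFilePath = "hdfs://master:9000/carbondata/sample.csv"
val loadSql = s"load data inpath '$dataFilePath' into table test_table"

// In the spark shell this would then be: cc.sql(loadSql)
println(loadSql)
```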


?????? wrote
> Hi,
> 
> when I run the following script:
> 
> 
> scala> val dataFilePath = new File("/carbondata/pt/sample.csv").getCanonicalPath
> scala> cc.sql(s"load data inpath '$dataFilePath' into table test_table")
> 
> 
> it turns out:
> 
> 
> org.apache.carbondata.processing.etl.DataLoadingException: The input file
> does not exist:
> hdfs://master:9000hdfs://master/opt/data/carbondata/pt/sample.csv
>       at
> org.apache.spark.util.FileUtils$$anonfun$getPaths$1.apply$mcVI$sp(FileUtils.scala:66)
>       at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> 
> 
> It confused me why there is a string "hdfs://master:9000" before
> "hdfs://master/opt/data/carbondata/pt/sample.csv". I can't find any
> configuration that contains "hdfs://master:9000", could anyone help me~
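The doubled prefix in the quoted error is consistent with the configured Hadoop default filesystem (fs.defaultFS in core-site.xml, presumably "hdfs://master:9000" on this cluster) being prepended to a path that already carries an hdfs:// scheme. A sketch of that naive concatenation:

```scala
// Assumed value of fs.defaultFS from core-site.xml on this cluster.
val defaultFs = "hdfs://master:9000"

// The path handed to the loader, which already has its own hdfs:// scheme.
val inputPath = "hdfs://master/opt/data/carbondata/pt/sample.csv"

// Naively prepending the default FS reproduces the doubled prefix
// seen in the DataLoadingException message.
val resolved = defaultFs + inputPath
println(resolved)
// hdfs://master:9000hdfs://master/opt/data/carbondata/pt/sample.csv
```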





--
View this message in context: 
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/etl-DataLoadingException-The-input-file-does-not-exist-tp4853p4854.html
Sent from the Apache CarbonData Mailing List archive at Nabble.com.
