Hi,
I'm getting a strange error in my program: CarbonContext cannot read a parquet file on HDFS. I tested in the spark shell and it returns the same error. Does anyone know what is going wrong?

>>>>>>Test for sqlContext, Success
----------------------------------------------------------------------------------------------------------------

Spark context available as sc (master = yarn-client, app id =
application_1479961381214_0551).

SQL context available as sqlContext.



scala> val parquetFile = sqlContext.read.parquet("/user/hive/default/testdata_parquet_all")

parquetFile: org.apache.spark.sql.DataFrame = [id: double,***************...





>>>>>>Test for CarbonContext, Failed

------------------------------------------------------------------------------------------------------------------

Spark context available as sc (master = yarn-client, app id =
application_1479961381214_0552).

SQL context available as sqlContext.



scala> import org.apache.spark.sql.CarbonContext

import org.apache.spark.sql.CarbonContext



scala> val cc = new CarbonContext(sc)

cc: org.apache.spark.sql.CarbonContext = org.apache.spark.sql.CarbonContext@3574122f



scala> val parquetFile = cc.read.parquet("/user/hive/default/testdata_parquet_all")

AUDIT 30-11 13:42:24,114 - [*******][appuser][Thread-1]Creating timestamp file for .

java.io.IOException: No such file or directory
         at java.io.UnixFileSystem.createFileExclusively(Native Method)
         at java.io.File.createNewFile(File.java:1006)
         at org.apache.carbondata.core.datastorage.store.impl.FileFactory.createNewFile(FileFactory.java:372)
         at org.apache.spark.sql.hive.CarbonMetastoreCatalog.updateSchemasUpdatedTime(CarbonMetastoreCatalog.scala:468)
         at org.apache.spark.sql.hive.CarbonMetastoreCatalog.loadMetadata(CarbonMetastoreCatalog.scala:181)
         at org.apache.spark.sql.hive.CarbonMetastoreCatalog.<init>(CarbonMetastoreCatalog.scala:114)
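For what it's worth, the audit line "Creating timestamp file for ." suggests the Carbon metastore is being initialized with an empty store path, so it tries to create its schema-timestamp file in a local directory that does not exist. If that is the cause, constructing the context with an explicit HDFS store location might avoid it. A minimal sketch, assuming CarbonContext accepts a store path as a second constructor argument; the HDFS path below is hypothetical and should be replaced with a writable directory on your cluster:

```scala
import org.apache.spark.sql.CarbonContext

// Assumption: the two-argument constructor takes a store location.
// "hdfs://namenode:8020/..." is a placeholder; point it at your
// namenode and a directory the spark-shell user can write to.
val storePath = "hdfs://namenode:8020/user/appuser/carbon.store"
val cc = new CarbonContext(sc, storePath)

// Plain parquet reads should then work the same way as with sqlContext,
// with the Carbon metastore files created on HDFS instead of locally.
val parquetFile = cc.read.parquet("/user/hive/default/testdata_parquet_all")
```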
