[ 
https://issues.apache.org/jira/browse/HUDI-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Sumit updated HUDI-4580:
------------------------------
    Summary: [DOCS] Update quickstart: Spark SQL create table statement fails 
with "partitioned by"  (was: Spark SQL create table statement fails with 
"partitioned by")

> [DOCS] Update quickstart: Spark SQL create table statement fails with 
> "partitioned by"
> --------------------------------------------------------------------------------------
>
>                 Key: HUDI-4580
>                 URL: https://issues.apache.org/jira/browse/HUDI-4580
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Ethan Guo
>            Assignee: Sagar Sumit
>            Priority: Blocker
>             Fix For: 0.12.0
>
>
> Spark 3.2.2, Hudi master
> Steps to reproduce
> {code:java}
> Spark shell
> export SPARK_HOME=/Users/ethan/Work/lib/spark-3.2.2-bin-hadoop3.2
> spark-3.2.2-bin-hadoop3.2/bin/spark-shell \
>   --master local[6] \
>   --driver-memory 5g \
>   --num-executors 6 --executor-cores 1 \
>   --executor-memory 1g \
>   --conf spark.ui.port=5555 \
>   --conf spark.driver.maxResultSize=1g \
>   --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
>   --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
>   --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
>   --conf spark.sql.catalogImplementation=in-memory \
>   --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain \
>   --jars $HUDI_DIR/packaging/hudi-spark-bundle/target/hudi-spark3.2-bundle_2.12-0.13.0-SNAPSHOT.jar
>   
> Prepare dataset in spark shell
> // spark-shell
> import org.apache.hudi.QuickstartUtils._
> import scala.collection.JavaConversions._
> import org.apache.spark.sql.SaveMode._
> import org.apache.hudi.DataSourceReadOptions._
> import org.apache.hudi.DataSourceWriteOptions._
> import org.apache.hudi.config.HoodieWriteConfig._
>
> val tableName = "hudi_trips_cow"
> val basePath = "file:///tmp/hudi_trips_cow"
> val dataGen = new DataGenerator
> val inserts = convertToStringList(dataGen.generateInserts(10))
> val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
> df.write.format("hudi").
>   options(getQuickstartWriteConfigs).
>   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
>   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
>   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
>   option(TABLE_NAME, tableName).
>   mode(Overwrite).
>   save(basePath)
>   
> Spark SQL
> spark-3.2.2-bin-hadoop3.2/bin/spark-sql \
>   --master local[6] \
>   --driver-memory 5g \
>   --num-executors 6 --executor-cores 1 \
>   --executor-memory 1g \
>   --conf spark.ui.port=5555 \
>   --conf spark.driver.maxResultSize=1g \
>   --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
>   --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
>   --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
>   --conf spark.sql.catalogImplementation=in-memory \
>   --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain \
>   --jars $HUDI_DIR/packaging/hudi-spark-bundle/target/hudi-spark3.2-bundle_2.12-0.13.0-SNAPSHOT.jar
>  
> spark-sql> create table hudi_trips_cow_ext using hudi
>          > partitioned by (partitionpath)
>          > location 'file:///tmp/hudi_trips_cow';
> Error in query: It is not allowed to specify partition columns when the table schema is not defined. When the table schema is not provided, schema and partition columns will be inferred.{code}
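> Given the error message, two likely fixes for the quickstart docs (a sketch, not verified against the bundle above; the column list in the second statement is a hypothetical subset of the quickstart data-generator schema):
> {code:sql}
> -- Option 1: omit "partitioned by" entirely and let Hudi infer both the
> -- schema and the partition columns from the existing table at the location
> create table hudi_trips_cow_ext using hudi
> location 'file:///tmp/hudi_trips_cow';
>
> -- Option 2: provide an explicit schema, after which "partitioned by" is
> -- accepted (column list here is illustrative, not the full quickstart schema)
> create table hudi_trips_cow_ext2 (
>   ts bigint,
>   uuid string,
>   rider string,
>   driver string,
>   fare double,
>   partitionpath string
> ) using hudi
> partitioned by (partitionpath)
> location 'file:///tmp/hudi_trips_cow';
> {code}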



--
This message was sent by Atlassian Jira
(v8.20.10#820010)