Hi all,

Sorry to bother you; I have a problem and hope you can help. I want to build a custom Spark data source based on the ParquetDataSourceV2 class in Spark v3.2.2, but I would like to leave the LOCATION field out of the DDL and instead set the table and partition paths in code. How can I do that?

This is my DDL:
```
CREATE EXTERNAL TABLE `DEFAULT`.`USER_PARQUET_READ` (
  `id` BIGINT COMMENT '',
  `name` STRING COMMENT '')
USING com.kyligence.spark.datasources.DefaultSource
PARTITIONED BY (`asOfDate` DATE COMMENT 'PARTITIONED KEY')
LOCATION ''  -- intentionally an empty string; I want to set the real path from Scala code
TBLPROPERTIES (
  'transient_lastDdlTime' = '1641864235',
  'skip.header.line.count' = '1')
```
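
For context, this is roughly the direction I am exploring in Scala. It is only a sketch: the override point (the protected getPaths hook from FileDataSourceV2) and the way the fallback path is computed are my assumptions, not working code.

```
package com.kyligence.spark.datasources

import org.apache.spark.sql.execution.datasources.v2.parquet.ParquetDataSourceV2
import org.apache.spark.sql.util.CaseInsensitiveStringMap

// Rough sketch: extend ParquetDataSourceV2 and override the protected
// getPaths hook inherited from FileDataSourceV2, so that when the DDL
// left LOCATION empty the paths come from code instead of the catalog.
class DefaultSource extends ParquetDataSourceV2 {

  override def shortName(): String = "kyligenceParquet"

  override protected def getPaths(map: CaseInsensitiveStringMap): Seq[String] = {
    // Paths coming from the "path"/"paths" options (i.e. the table LOCATION).
    val declared = super.getPaths(map).filter(_.trim.nonEmpty)
    if (declared.nonEmpty) declared
    else Seq(resolveLocationInCode(map))
  }

  // Hypothetical helper: compute the table/partition root however the
  // application needs to; the env var here is just a placeholder.
  private def resolveLocationInCode(map: CaseInsensitiveStringMap): String =
    sys.env.getOrElse("USER_PARQUET_READ_ROOT", "hdfs:///data/user_parquet_read")
}
```

Is overriding getPaths the right hook for this, or does the catalog resolve the table location before the data source ever sees it?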

Thanks & Best regards


Zhuolin Ji
Software Engineer



Mobile: +1 312 451 6352
E-Mail: zhuolin...@kyligence.io
Address: 99 Almaden Blvd Ste 663, San Jose, CA 95113
www.kyligence.io
