comphead commented on PR #1649:
URL: https://github.com/apache/datafusion-comet/pull/1649#issuecomment-2822361960

   When testing I just realized that Apache Spark already sends `fs.defaultFS`, including its scheme, in the logical plan, which is then passed down to Comet.
   
   So this test works: Spark sends the path `/tmp/2` to the native side qualified as `hdfs://namenode:9000/tmp/2`.
   
   ```scala
     test("Test V1 parquet scan uses native_datafusion with HDFS") {
       withSQLConf(
         CometConf.COMET_ENABLED.key -> "true",
         CometConf.COMET_EXEC_ENABLED.key -> "true",
       CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,
         SQLConf.USE_V1_SOURCE_LIST.key -> "parquet",
         "fs.defaultFS" -> "hdfs://namenode:9000",
         "dfs.client.use.datanode.hostname" -> "true") {
         val df = spark.read.parquet("/tmp/2")
         df.show(false)
         df.explain("extended")
       }
     }
   ```
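   
   Under the hood, this qualification amounts to resolving the unqualified path against the `fs.defaultFS` URI. A minimal Java sketch of the idea (illustration only; Hadoop does this via `FileSystem.makeQualified`, and `namenode:9000` is just the host from the test above):
   
   ```java
   import java.net.URI;
   
   public class QualifyPath {
       public static void main(String[] args) {
           // fs.defaultFS as configured in the test above
           URI defaultFS = URI.create("hdfs://namenode:9000/");
           // the unqualified absolute path passed to spark.read.parquet
           URI qualified = defaultFS.resolve("/tmp/2");
           System.out.println(qualified); // hdfs://namenode:9000/tmp/2
       }
   }
   ```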
   
   However, when running `spark-shell`, the setting must be passed with Spark's Hadoop passthrough prefix: `--conf spark.hadoop.fs.defaultFS=hdfs://namenode:9000`.
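   
   A sketch of such an invocation (showing only the two Hadoop settings from the test above; add the usual Comet confs and jars as needed):
   
   ```shell
   # fs.defaultFS must go through Spark's spark.hadoop.* passthrough so it
   # lands in the Hadoop Configuration; a bare fs.defaultFS conf would not.
   spark-shell \
     --conf spark.hadoop.fs.defaultFS=hdfs://namenode:9000 \
     --conf spark.hadoop.dfs.client.use.datanode.hostname=true
   ```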
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

