amaliujia commented on PR #40498:
URL: https://github.com/apache/spark/pull/40498#issuecomment-1478220179

   @hvanhovell the existing codebase uses this to verify the options:
    
   ```
     test("SPARK-32844: DataFrameReader.table take the specified options for V1 relation") {
       withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> "parquet") {
         withTable("t") {
           sql("CREATE TABLE t(i int, d double) USING parquet OPTIONS ('p1'='v1', 'p2'='v2')")

           val df = spark.read.option("P2", "v2").option("p3", "v3").table("t")
           val options = df.queryExecution.analyzed.collectFirst {
             case r: LogicalRelation => r.relation.asInstanceOf[HadoopFsRelation].options
           }.get
           assert(options("p2") == "v2")
           assert(options("p3") == "v3")
         }
       }
     }
   ```
     
   However, this won't work for Spark Connect. Do you know if we have another way to achieve it?
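
   One hypothetical alternative, sketched only: since the Connect client Dataset has no `queryExecution`, a test could instead inspect the proto plan the client builds. This assumes the Dataset's proto plan is reachable from test code (the `plan` accessor below is an assumption) and that the read is encoded as a `Read.NamedTable` relation carrying an options map:

   ```
     // Sketch only: `df.plan` and the NamedTable options map are assumptions,
     // not confirmed API. Options are asserted as written on the client, since
     // case-insensitive resolution would only happen on the server.
     val df = spark.read.option("P2", "v2").option("p3", "v3").table("t")
     val options = df.plan.getRoot.getRead.getNamedTable.getOptionsMap
     assert(options.get("P2") == "v2")
     assert(options.get("p3") == "v3")
   ```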

