rymurr commented on pull request #1783:
URL: https://github.com/apache/iceberg/pull/1783#issuecomment-731137385


   @jackye1995 I think I misunderstood your original comment. When you said:
   
   ```
   df.write.format("iceberg")
     .mode("append")
     .option("catalog-impl", "com.my.own.CatalogImpl")
     .save("testing.foo")
   ```
   
   Did you mean to use the `catalog-impl` parameter to look up an existing Iceberg catalog by type, one that would (hopefully) have been constructed previously, so we wouldn't have to set the extra options I mentioned above? Does that sound right? While that is much simpler, I think it's important that the user doesn't have to set options in the `read`/`write` Spark commands, but can instead set them once in `SparkConf` (see the sketch below).
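
   For illustration, a minimal sketch of that flow, assuming a hypothetical `spark.iceberg.catalog-impl` property (the key name is made up here, not an existing setting):

   ```
   import org.apache.spark.sql.SparkSession

   // Catalog configuration is set once on the session (property name hypothetical),
   // so the write itself carries no catalog options.
   val spark = SparkSession.builder()
     .config("spark.iceberg.catalog-impl", "com.my.own.CatalogImpl")
     .getOrCreate()

   val df = spark.range(10).toDF("id")

   df.write.format("iceberg")
     .mode("append")
     .save("testing.foo")
   ```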
   
   I wonder if we could set a default catalog in `SparkConf` (e.g. `spark.iceberg.default_catalog=com.my.own.CatalogImpl`)? If it is set, the readers/writers would use that catalog unless an `option` is passed explicitly. If neither is set, it would fall back to the current Hive/HDFS implementation.
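
   To make the precedence concrete, here is a rough sketch of the lookup order (the method and the `spark.iceberg.default_catalog` key are my suggestion, not an existing API):

   ```
   import org.apache.spark.SparkConf

   // Hypothetical resolution order for the catalog implementation:
   //   1. an explicit option("catalog-impl", ...) on the reader/writer,
   //   2. the SparkConf default proposed above,
   //   3. the current Hive catalog behavior.
   def resolveCatalogImpl(options: Map[String, String], conf: SparkConf): String =
     options.get("catalog-impl")
       .orElse(conf.getOption("spark.iceberg.default_catalog"))
       .getOrElse("org.apache.iceberg.hive.HiveCatalog")
   ```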
   
   I believe this would still be an issue in Spark 2, as it doesn't have the same (Spark) catalog support.
   
   Thoughts?

