FANNG1 commented on code in PR #6212: URL: https://github.com/apache/gravitino/pull/6212#discussion_r1923193579
########## docs/spark-connector/spark-catalog-jdbc.md ##########
@@ -0,0 +1,69 @@
+---
+title: "Spark connector JDBC catalog"
+slug: /spark-connector/spark-catalog-jdbc
+keyword: spark connector jdbc catalog
+license: "This software is licensed under the Apache License version 2."
+---
+
+The Apache Gravitino Spark connector offers the capability to read JDBC tables, with the metadata managed by the Gravitino server. To enable the JDBC catalog within the Spark connector, you must download the JDBC driver jar for your database and add it to the Spark classpath.
+
+## Capabilities
+
+#### Supported DML and DDL operations:
+
+- `CREATE TABLE`
+- `DROP TABLE`
+- `ALTER TABLE`
+- `SELECT`
+- `INSERT`
+
+  :::info
+  JDBC tables do not support distributed transactions. When writing data to an RDBMS, each Spark task runs as an independent transaction. If some tasks succeed and others fail, dirty data is generated.
+  :::
+
+#### Unsupported operations:

Review Comment:
   please add an empty line after the header
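The classpath requirement described in the doc under review could be illustrated with a short launch command. This is a sketch only: the driver jar name, version, and path below are hypothetical (a MySQL driver is used as the example), not taken from the PR; substitute the driver for whichever RDBMS backs the JDBC catalog.

```shell
# Sketch: launch spark-sql with a JDBC driver jar on the Spark classpath.
# mysql-connector-j-8.0.33.jar and its path are illustrative assumptions;
# --jars is the standard Spark option for adding jars to the classpath.
./bin/spark-sql \
  --jars /path/to/mysql-connector-j-8.0.33.jar
```

Alternatively, the jar can be copied into Spark's `jars/` directory so it is picked up without per-job flags.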
