FANNG1 commented on code in PR #7920:
URL: https://github.com/apache/gravitino/pull/7920#discussion_r2251221450


##########
docs/flink-connector/flink-connector.md:
##########
@@ -38,28 +38,33 @@ This capability allows users to perform federation queries, accessing data from
 Set the flink configuration in flink-conf.yaml.
 ```yaml
 table.catalog-store.kind: gravitino
-table.catalog-store.gravitino.gravitino.metalake: test
+table.catalog-store.gravitino.gravitino.metalake: metalake_demo
 table.catalog-store.gravitino.gravitino.uri: http://localhost:8090
 ```
 Or you can set the flink configuration in the `TableEnvironment`.
 ```java
 final Configuration configuration = new Configuration();
 configuration.setString("table.catalog-store.kind", "gravitino");
-configuration.setString("table.catalog-store.gravitino.gravitino.metalake", "test");
+configuration.setString("table.catalog-store.gravitino.gravitino.metalake", "metalake_demo");
 configuration.setString("table.catalog-store.gravitino.gravitino.uri", "http://localhost:8090");
 EnvironmentSettings.Builder builder = EnvironmentSettings.newInstance().withConfiguration(configuration);
 TableEnvironment tableEnv = TableEnvironment.create(builder.inBatchMode().build());
 ```
-3. Execute the Flink SQL query.
+3. Add necessary jar files to Flink's `lib` sub-folder.
+To run Flink with the Gravitino connector and access data sources such as Hive, Iceberg, and others, you need to put the necessary jar files into Flink's `lib` folder. For example, to access Hive, you need to add at least these jar files: `flink-connector-hive_{version}.jar`, `hive-metastore-{version}.jar`, `hive-exec-{version}-core.jar`, `hive-serde-{version}.jar`, `hive-shims-common-{version}.jar`, `hive-storage-api-{version}.jar`, etc.

Review Comment:
   Flink package doesn't contain hive jars?
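For context, here is a minimal end-to-end sketch of what the documented steps add up to: the catalog-store settings from the diff plus a query against a catalog managed by Gravitino. The metalake name and URI are taken from the docs above; the catalog name `hive_catalog` is a hypothetical placeholder for a catalog already registered in the metalake, and the query only succeeds once the Hive jars discussed in the review are present in Flink's `lib` folder.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class GravitinoFlinkQuickstart {
  public static void main(String[] args) {
    // Point the Flink catalog store at the Gravitino metalake
    // (same values as in the documentation diff above).
    Configuration configuration = new Configuration();
    configuration.setString("table.catalog-store.kind", "gravitino");
    configuration.setString("table.catalog-store.gravitino.gravitino.metalake", "metalake_demo");
    configuration.setString("table.catalog-store.gravitino.gravitino.uri", "http://localhost:8090");

    EnvironmentSettings settings =
        EnvironmentSettings.newInstance().withConfiguration(configuration).inBatchMode().build();
    TableEnvironment tableEnv = TableEnvironment.create(settings);

    // "hive_catalog" is a hypothetical name; it must match a catalog registered
    // in the Gravitino metalake. This call fails at runtime if the Hive connector
    // jars listed in the docs are missing from Flink's lib folder.
    tableEnv.useCatalog("hive_catalog");
    tableEnv.executeSql("SHOW DATABASES").print();
  }
}
```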
