1996fanrui commented on code in PR #741:
URL: https://github.com/apache/flink-kubernetes-operator/pull/741#discussion_r1453004963


##########
docs/content/docs/custom-resource/autoscaler.md:
##########
@@ -260,17 +260,46 @@ job.autoscaler.metrics.window : 3m
 > `ScalingReport` will show the recommended parallelism for each vertex.
 
 After the flink job starts, please start the StandaloneAutoscaler process by 
the
-following command.
+following command. Please download released autoscaler-standalone jar from 
+[here](https://repo.maven.apache.org/maven2/org/apache/flink/flink-autoscaler-standalone/)
 first.
 
 ```
 java -cp flink-autoscaler-standalone-{{< version >}}.jar \
 org.apache.flink.autoscaler.standalone.StandaloneAutoscalerEntrypoint \
---flinkClusterHost localhost \
---flinkClusterPort 8081
+--autoscaler.standalone.fetcher.flink-cluster.host localhost \
+--autoscaler.standalone.fetcher.flink-cluster.port 8081
 ```
 
-Updating the `flinkClusterHost` and `flinkClusterPort` based on your flink cluster.
-In general, the host and port are the same as Flink WebUI.
+Update the `autoscaler.standalone.fetcher.flink-cluster.host` and `autoscaler.standalone.fetcher.flink-cluster.port`
+based on your Flink cluster. In general, the host and port are the same as the Flink WebUI.
+
+### Using the JDBC Autoscaler State Store
+
+A driver dependency is required to connect to the specified database. The currently supported drivers are listed below;
+please download the JDBC driver and initialize the database and table first.
+
+| Driver     | Group Id           | Artifact Id            | JAR                                                                           | Schema                                                                                                                                                |
+|:-----------|:-------------------|:-----------------------|:------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|
+| MySQL      | `mysql`            | `mysql-connector-java` | [Download](https://repo.maven.apache.org/maven2/mysql/mysql-connector-java/)   | [Table DDL](https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler-plugin-jdbc/src/main/resources/schema/mysql_schema.sql)     |
+| PostgreSQL | `org.postgresql`   | `postgresql`           | [Download](https://jdbc.postgresql.org/download/)                              | [Table DDL](https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler-plugin-jdbc/src/main/resources/schema/postgres_schema.sql)  |
+| Derby      | `org.apache.derby` | `derby`                | [Download](http://db.apache.org/derby/derby_downloads.html)                    | [Table DDL](https://github.com/apache/flink-kubernetes-operator/blob/main/flink-autoscaler-plugin-jdbc/src/main/resources/schema/derby_schema.sql)     |
+
+```
+JDBC_DRIVER_JAR=./mysql-connector-java-8.0.30.jar
+# export the password of jdbc state store
+export STATE_STORE_JDBC_PWD=123456
+
+java -cp flink-autoscaler-standalone-{{< version >}}.jar:${JDBC_DRIVER_JAR} \
+org.apache.flink.autoscaler.standalone.StandaloneAutoscalerEntrypoint \
+--autoscaler.standalone.fetcher.flink-cluster.host localhost \
+--autoscaler.standalone.fetcher.flink-cluster.port 8081 \
+--autoscaler.standalone.state-store.type jdbc \
+--autoscaler.standalone.state-store.jdbc.url jdbc:mysql://localhost:3306/flink_autoscaler \
+--autoscaler.standalone.state-store.jdbc.username root
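
# Illustrative sketch: one way to initialize the MySQL database and table before
# running the command above. It assumes the `flink_autoscaler` database name from
# the JDBC URL and a local copy of the mysql_schema.sql Table DDL linked in the
# driver table; adjust the credentials and file path to your environment.
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS flink_autoscaler;"
mysql -u root -p flink_autoscaler < mysql_schema.sql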

Review Comment:
   In general, the environment variable name doesn't need to be changed. Users need to export the password using this environment variable.
   
   So I didn't mention `password-env-variable` here, but the beginning of this doc already mentions how to export the password. WDYT?
   
   ```
   # export the password of jdbc state store
   export STATE_STORE_JDBC_PWD=123456
   ```
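   
   As a usage sketch (assuming bash), the password can also be exported without hard-coding it in the command or leaving it in shell history:
   
   ```
   # read the JDBC state store password interactively instead of hard-coding it
   read -s -p "JDBC password: " STATE_STORE_JDBC_PWD
   export STATE_STORE_JDBC_PWD
   ```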



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
