XComp commented on a change in pull request #14272:
URL: https://github.com/apache/flink/pull/14272#discussion_r533364585
##########
File path: docs/deployment/resource-providers/yarn.zh.md
##########
@@ -78,7 +78,7 @@ Congratulations! You have successfully run a Flink application by deploying Flin
## Deployment Modes Supported by Flink on YARN
-For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.zh.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
Review comment:
```suggestion
For production use, we recommend deploying Flink Applications in the [Per-Job or Application Mode]({% link deployment/index.zh.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
```
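As a side note for anyone reading along: deploying in Application Mode on YARN looks roughly like the sketch below. The example jar path is only an illustration, not part of the suggestion.
```sh
# Application Mode: main() runs on the YARN cluster, so each
# application gets its own dedicated cluster (better isolation).
./bin/flink run-application -t yarn-application \
    ./examples/streaming/TopSpeedWindowing.jar
```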
##########
File path: docs/dev/batch/hadoop_compatibility.md
##########
@@ -64,7 +64,17 @@ and Reducers.
</dependency>
{% endhighlight %}
-See also **[how to configure hadoop dependencies]({{ site.baseurl }}/deployment/resource-providers/hadoop.html#add-hadoop-classpaths)**.
+If you want to run your Flink application locally (from your IDE), you also need to add
Review comment:
```suggestion
If you want to run your Flink application locally (e.g. from your IDE), you also need to add
```
##########
File path: docs/dev/project-configuration.zh.md
##########
@@ -152,8 +152,12 @@ for details on how to build Flink for a specific Scala version.
*(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/deployment/resource-providers/hadoop.html)
-for details.
+adding Hadoop as an application dependency. Flink will use the Hadoop dependencies specified by the `HADOOP_CLASSPATH`
+environment variable, which can usually be set by calling:
Review comment:
```suggestion
environment variable, which can be set in the following way:
```
Minor thing: it's just that I would "call" a program, but not the setting of a variable.
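For completeness, the command the docs usually show for this (assuming a local Hadoop installation with `hadoop` on the `PATH`) is:
```sh
# Ask the Hadoop installation for its classpath and export it, so
# Flink picks up the Hadoop dependencies on startup.
export HADOOP_CLASSPATH=`hadoop classpath`
```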
##########
File path: docs/dev/project-configuration.md
##########
@@ -152,8 +152,12 @@ for details on how to build Flink for a specific Scala version.
*(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/deployment/resource-providers/hadoop.html)
-for details.
+adding Hadoop as an application dependency. Flink will use the Hadoop dependencies specified by the `HADOOP_CLASSPATH`
+environment variable, which can usually be set by calling
Review comment:
```suggestion
environment variable, which can be set in the following way:
```
Minor thing: it's just that I would "call" a program, but not the setting of a variable.
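Same remark as on the Chinese page. To sanity-check the wording end to end, the export can be combined with a local cluster start, e.g. (a sketch, assuming a standard Flink distribution):
```sh
export HADOOP_CLASSPATH=`hadoop classpath`
# A cluster started from the same shell now sees the Hadoop jars.
./bin/start-cluster.sh
```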
##########
File path: docs/dev/batch/hadoop_compatibility.zh.md
##########
@@ -64,7 +64,17 @@ and Reducers.
</dependency>
{% endhighlight %}
-See also **[how to configure hadoop dependencies]({{ site.baseurl }}/deployment/resource-providers/hadoop.html#add-hadoop-classpaths)**.
+If you want to run your Flink application locally (from your IDE), you also need to add
Review comment:
```suggestion
If you want to run your Flink application locally (e.g. from your IDE), you also need to add
```
##########
File path: docs/dev/table/connectors/hive/index.md
##########
@@ -92,8 +92,11 @@ to make the integration work in Table API program or SQL in SQL Client.
Alternatively, you can put these dependencies in a dedicated folder, and add them to classpath with the `-C`
or `-l` option for Table API program or SQL Client respectively.
-Apache Hive is built on Hadoop, so you need Hadoop dependency first, please refer to
-[Providing Hadoop classes]({{ site.baseurl }}/deployment/resource-providers/hadoop.html#providing-hadoop-classes).
+Apache Hive is built on Hadoop, so you need to provide Hadoop dependenies, by setting the `HADOOP_CLASSPATH`
Review comment:
```suggestion
Apache Hive is built on Hadoop, so you need to provide Hadoop dependencies by setting the `HADOOP_CLASSPATH`
```
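To make the new wording concrete: getting the SQL Client running against Hive then typically involves the two steps below. The `./hive-deps` folder name is made up for the example; any directory holding the connector jars works with `-l`.
```sh
# Step 1: provide the Hadoop classes via HADOOP_CLASSPATH.
export HADOOP_CLASSPATH=`hadoop classpath`
# Step 2: add the Hive dependency jars from a dedicated folder
# with the SQL Client's -l option.
./bin/sql-client.sh embedded -l ./hive-deps
```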
##########
File path: docs/deployment/resource-providers/yarn.zh.md
##########
@@ -78,7 +78,7 @@ Congratulations! You have successfully run a Flink application by deploying Flin
## Deployment Modes Supported by Flink on YARN
-For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.zh.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
Review comment:
There's another occurrence in the first sentence of the `Per-Job Cluster Mode` section.
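(For symmetry with the Application Mode note above, the Per-Job counterpart would be roughly the following; again, the jar path is only illustrative.)
```sh
# Per-Job Mode: a dedicated YARN cluster is spun up for this one job.
./bin/flink run -t yarn-per-job ./examples/streaming/TopSpeedWindowing.jar
```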
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]