hapihu commented on a change in pull request #16295:
URL: https://github.com/apache/flink/pull/16295#discussion_r664029434
##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -226,191 +220,179 @@ Using standalone source with error rate 0.000000 and sleep delay 1 millis
Job has been submitted with JobID 97b20a0a8ffd5c1d656328b0cd6436a6
```
-See how the command is equal to the [initial run command](#submitting-a-job) except for the
-`--fromSavepoint` parameter which is used to refer to the state of the
-[previously stopped job](#stopping-a-job-gracefully-creating-a-final-savepoint). A new JobID is
-generated that can be used to maintain the job.
+Note that, apart from the `--fromSavepoint` parameter, which refers to the state of the [previously stopped job](#stopping-a-job-gracefully-creating-a-final-savepoint), this command is identical to the [initial run command](#submitting-a-job). The operation generates a new JobID that can be used to maintain the job.
+
-By default, we try to match the whole savepoint state to the job being submitted. If you want to
-allow to skip savepoint state that cannot be restored with the new job you can set the
-`--allowNonRestoredState` flag. You need to allow this if you removed an operator from your program
-that was part of the program when the savepoint was triggered and you still want to use the savepoint.
+By default, Flink tries to restore the newly submitted job to the complete savepoint state. If you want to skip savepoint state that cannot be restored with the new job, you can set the `--allowNonRestoredState` flag. You need to set this flag if you removed an operator that was part of the program when the savepoint was triggered and you still want to use the savepoint.
```bash
$ ./bin/flink run \
--fromSavepoint <savepointPath> \
--allowNonRestoredState ...
```
-This is useful if your program dropped an operator that was part of the savepoint.
+This option is helpful if your program dropped an operator that was part of the savepoint.
{{< top >}}
-## CLI Actions
+<a name="cli-actions"> </a>
+
+## CLI Actions
+
+Here is an overview of actions supported by Flink's CLI tool:
-Here's an overview of actions supported by Flink's CLI tool:
<table class="table table-bordered">
<thead>
<tr>
- <th class="text-left" style="width: 25%">Action</th>
- <th class="text-left" style="width: 50%">Purpose</th>
+      <th class="text-left" style="width: 25%">Action</th>
+      <th class="text-left" style="width: 50%">Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td><code class="highlighter-rouge">run</code></td>
<td>
-        This action executes jobs. It requires at least the jar containing the job. Flink-
-        or job-related arguments can be passed if necessary.
+        This action is used to execute jobs. The jar containing the job must be specified. Flink-
+        or job-related arguments can be passed if necessary.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">run-application</code></td>
<td>
-        This action executes jobs in <a href="{{< ref "docs/deployment/overview" >}}#application-mode">
-        Application Mode</a>. Other than that, it requires the same parameters as the
-        <code class="highlighter-rouge">run</code> action.
+        This action is used to execute jobs in <a href="{{< ref "docs/deployment/overview" >}}#application-mode">Application Mode</a>.
+        Other than that, it requires the same parameters as the <code class="highlighter-rouge">run</code> action.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">info</code></td>
<td>
-        This action can be used to print an optimized execution graph of the passed job. Again,
-        the jar containing the job needs to be passed.
+        This action is used to print an optimized execution graph of the passed job. Again,
+        the jar containing the job needs to be specified.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">list</code></td>
<td>
- This action lists all running or scheduled jobs.
+        This action is used to list all running or scheduled jobs.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">savepoint</code></td>
<td>
-        This action can be used to create or disposing savepoints for a given job. It might be
-        necessary to specify a savepoint directory besides the JobID, if the
-        <a href="{{< ref "docs/deployment/config" >}}#state-savepoints-dir">state.savepoints.dir</a>
-        parameter was not specified in <code class="highlighter-rouge">conf/flink-config.yaml</code>.
+        This action is used to create or dispose of savepoints for a given job. If the
+        <a href="{{< ref "docs/deployment/config" >}}#state-savepoints-dir">state.savepoints.dir</a>
+        parameter is not specified in <code class="highlighter-rouge">conf/flink-config.yaml</code>,
+        a savepoint directory needs to be specified in addition to the JobID.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">cancel</code></td>
<td>
-        This action can be used to cancel running jobs based on their JobID.
+        This action is used to cancel running jobs based on their JobID.
</td>
</tr>
<tr>
<td><code class="highlighter-rouge">stop</code></td>
<td>
-        This action combines the <code class="highlighter-rouge">cancel</code> and
-        <code class="highlighter-rouge">savepoint</code> actions to stop a running job
-        but also create a savepoint to start from again.
+        This action combines the functionality of <code class="highlighter-rouge">cancel</code> and
+        <code class="highlighter-rouge">savepoint</code>: it stops a running job and, at the same time,
+        creates a savepoint from which the job can be started again.
</td>
</tr>
</tbody>
</table>
-A more fine-grained description of all actions and their parameters can be accessed through `bin/flink --help`
-or the usage information of each individual action `bin/flink <action> --help`.
+
+A more fine-grained description of all supported actions and their parameters is available through `bin/flink --help`; the usage information of an individual action can be viewed with `bin/flink <action> --help`.
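For illustration, the `savepoint` action from the table above could be invoked as in the minimal sketch below; the job ID is the one from the earlier run output, while the savepoint directory and the savepoint path passed to `--dispose` are placeholders.

```bash
# Trigger a savepoint for a running job and write it to an explicit directory
$ ./bin/flink savepoint 97b20a0a8ffd5c1d656328b0cd6436a6 /tmp/flink-savepoints

# Dispose of a savepoint that is no longer needed (placeholder path)
$ ./bin/flink savepoint --dispose /tmp/flink-savepoints/savepoint-97b20a-1234567890ab
```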
{{< top >}}
-## Advanced CLI
-
+<a name="advanced-cli"> </a>
+
+## Advanced CLI
+
+<a name="rest-api"> </a>
+
### REST API
-The Flink cluster can be also managed using the [REST API]({{< ref "docs/ops/rest_api" >}}). The commands
-described in previous sections are a subset of what is offered by Flink's REST endpoints. Therefore,
-tools like `curl` can be used to get even more out of Flink.
+The Flink cluster can also be managed using the [REST API]({{< ref "docs/ops/rest_api" >}}). The commands described in the previous sections are a subset of what Flink's REST endpoints offer.
+
+Therefore, tools like `curl` can be used to get even more out of Flink.
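As a minimal sketch, assuming the cluster's REST endpoint is reachable at the default `localhost:8081`, the jobs known to the cluster can be listed with a plain HTTP request:

```bash
# Query the jobs overview endpoint of the Flink REST API
$ curl http://localhost:8081/jobs/overview
```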
+
+<a name="selecting-deployment-targets"> </a>
+
+### Selecting Deployment Targets
-### Selecting Deployment Targets
+Flink is compatible with multiple cluster management frameworks, such as [Kubernetes]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}) and [YARN]({{< ref "docs/deployment/resource-providers/yarn" >}}), which are described in more detail in the Resource Provider section. Jobs can be submitted in different [Deployment Modes]({{< ref "docs/deployment/overview" >}}#deployment-modes). The parameterization of a job submission differs based on the underlying framework and deployment mode.
-Flink is compatible with multiple cluster management frameworks like
-[Kubernetes]({{< ref "docs/deployment/resource-providers/native_kubernetes" >}}) or
-[YARN]({{< ref "docs/deployment/resource-providers/yarn" >}}) which are described in more detail in the
-Resource Provider section. Jobs can be submitted in different [Deployment Modes]({{< ref "docs/deployment/overview" >}}#deployment-modes).
-The parameterization of a job submission differs based on the underlying framework and Deployment Mode.
+`bin/flink` offers a `--target` parameter to handle the different options. In addition to that, jobs still have to be submitted using either `run` (for [Session]({{< ref "docs/deployment/overview" >}}#session-mode) and [Per-Job Mode]({{< ref "docs/deployment/overview" >}}#per-job-mode)) or `run-application` (for [Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode)).
+
+See the following summary of parameter combinations:
-`bin/flink` offers a parameter `--target` to handle the different options. In addition to that, jobs
-have to be submitted using either `run` (for [Session]({{< ref "docs/deployment/overview" >}}#session-mode)
-and [Per-Job Mode]({{< ref "docs/deployment/overview" >}}#per-job-mode)) or `run-application` (for
-[Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode)). See the following summary of
-parameter combinations:
* YARN
-  * `./bin/flink run --target yarn-session`: Submission to an already running Flink on YARN cluster
-  * `./bin/flink run --target yarn-per-job`: Submission spinning up a Flink on YARN cluster in Per-Job Mode
-  * `./bin/flink run-application --target yarn-application`: Submission spinning up Flink on YARN cluster in Application Mode
+  * `./bin/flink run --target yarn-session`: Submits the job in `Session` mode to a Flink already running on a YARN cluster.
+  * `./bin/flink run --target yarn-per-job`: Submits the job in `Per-Job` mode, spinning up a dedicated Flink on the YARN cluster.
+  * `./bin/flink run-application --target yarn-application`: Submits the job in `yarn-application` mode, spinning up a dedicated Flink on the YARN cluster.
* Kubernetes
-  * `./bin/flink run --target kubernetes-session`: Submission to an already running Flink on Kubernetes cluster
-  * `./bin/flink run-application --target kubernetes-application`: Submission spinning up a Flink on Kubernetes cluster in Application Mode
+  * `./bin/flink run --target kubernetes-session`: Submits the job in `Session` mode to a Flink already running on a Kubernetes cluster.
+  * `./bin/flink run-application --target kubernetes-application`: Submits the job in `yarn-application` mode, spinning up a dedicated Flink on the Kubernetes cluster.
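A rough sketch of the last combination above, an Application Mode submission on Kubernetes, might look like the following; the cluster id, container image, and job jar path are placeholders rather than values from this page:

```bash
# Spin up a dedicated Flink cluster on Kubernetes in Application Mode and run the packaged job
$ ./bin/flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=my-flink-application \
    -Dkubernetes.container.image=my-flink-image:latest \
    local:///opt/flink/usrlib/my-job.jar
```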
Review comment:
OK, I have changed it to the following:
The job is submitted in `yarn-application` mode, and a dedicated Flink Job is spun up on the Kubernetes cluster.