[GitHub] flink pull request #6112: [FLINK-9508][Docs]General Spell Check on Flink Doc...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/6112 ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192699981 --- Diff: docs/dev/execution_configuration.md --- @@ -45,41 +45,41 @@ The following configuration options are available: (the default is bold) - **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs. With the closure cleaner disabled, it might happen that an anonymous user function is referencing the surrounding class, which is usually not Serializable. This will lead to exceptions by the serializer. -- `getParallelism()` / `setParallelism(int parallelism)` Set the default parallelism for the job. +- `getParallelism()` / `setParallelism(int parallelism)`. Set the default parallelism for the job. --- End diff -- I would remove them. ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192697051 --- Diff: docs/dev/execution_configuration.md --- @@ -45,41 +45,41 @@ The following configuration options are available: (the default is bold) - **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs. With the closure cleaner disabled, it might happen that an anonymous user function is referencing the surrounding class, which is usually not Serializable. This will lead to exceptions by the serializer. -- `getParallelism()` / `setParallelism(int parallelism)` Set the default parallelism for the job. +- `getParallelism()` / `setParallelism(int parallelism)`. Set the default parallelism for the job. --- End diff -- Some of the items have a period and some don't. Should we remove the period from the ones that have it? ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192693691 --- Diff: docs/ops/upgrading.md --- @@ -172,7 +172,7 @@ First major step in job migration is taking a savepoint of your job running in t You can do this with the command: {% highlight shell %} -$ bin/flink savepoint :jobId [:targetDirectory] +$ ./bin/flink savepoint [savepointDirectory] --- End diff -- will revert back! I was looking at the same command on the CLI page, which uses `savepointDirectory` (https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/cli.html#trigger-a-savepoint), but I see `targetDirectory` has also been used in the savepoint documentation ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192693807 --- Diff: docs/ops/filesystems.md --- @@ -112,10 +111,9 @@ To prevent inactive streams from taking up the complete pool (preventing new con `fs..limit.stream-timeout`. If a stream does not read/write any bytes for at least that amount of time, it is forcibly closed. These limits are enforced per TaskManager, so each TaskManager in a Flink application or cluster will open up to that number of connections. -In addition, the The limit are also enforced only per FileSystem instance. Because File Systems are created per scheme and authority, different +In addition, the limit are also enforced only per FileSystem instance. Because File Systems are created per scheme and authority, different --- End diff -- will change ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192690554 --- Diff: docs/ops/upgrading.md --- @@ -183,15 +183,15 @@ In this step, we update the framework version of the cluster. What this basicall the Flink installation with the new version. This step can depend on how you are running Flink in your cluster (e.g. standalone, on Mesos, ...). -If you are unfamiliar with installing Flink in your cluster, please read the [deployment and cluster setup documentation]({{ site.baseurl }}/ops/deployment/cluster_setup.html). +If you are unfamiliar with installing Flink in your cluster, please read the [clusters and deployment setup documentation]({{ site.baseurl }}/ops/deployment/cluster_setup.html). --- End diff -- will revert back! I wanted to make it similar to the `Clusters and Deployment` page title ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192689975 --- Diff: docs/ops/security-ssl.md --- @@ -22,22 +22,22 @@ specific language governing permissions and limitations under the License. --> -This page provides instructions on how to enable SSL for the network communication between different flink components. +This page provides instructions on how to enable SSL for the network communication between different Flink components. ## SSL Configuration -SSL can be enabled for all network communication between flink components. SSL keystores and truststore has to be deployed on each flink node and configured (conf/flink-conf.yaml) using keys in the security.ssl.* namespace (Please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when security.ssl.enabled is set to true. +SSL can be enabled for all network communication between Flink components. SSL keystores and truststore has to be deployed on each Flink node and configured (conf/flink-conf.yaml) using keys in the `security.ssl.*` namespace (Please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when `security.ssl.enabled` is set to true. * **taskmanager.data.ssl.enabled**: SSL flag for data communication between task managers * **blob.service.ssl.enabled**: SSL flag for blob service client/server communication -* **akka.ssl.enabled**: SSL flag for the akka based control connection between the flink client, jobmanager and taskmanager -* **jobmanager.web.ssl.enabled**: Flag to enable https access to the jobmanager's web frontend +* **akka.ssl.enabled**: SSL flag for akka based control connection between the Flink client, JobManager and TaskManager --- End diff -- will revert back! 
I was reading other pages and it seems there is some inconsistency, as they use the `JobManager` and `TaskManager` form ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192689067 --- Diff: docs/ops/filesystems.md --- @@ -70,21 +70,20 @@ That way, Flink seamlessly supports all of Hadoop file systems, and all Hadoop-c - **har** - ... - ## Common File System configurations The following configuration settings exist across different file systems Default File System -If paths to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. +If path to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. --- End diff -- will revert back ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192689122 --- Diff: docs/ops/filesystems.md --- @@ -70,21 +70,20 @@ That way, Flink seamlessly supports all of Hadoop file systems, and all Hadoop-c - **har** - ... - ## Common File System configurations The following configuration settings exist across different file systems Default File System -If paths to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. +If path to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. {% highlight yaml %} fs.default-scheme: {% endhighlight %} -For example, if the default file system configured as `fs.default-scheme: hdfs://localhost:9000/`, then a a file path of -`/user/hugo/in.txt'` is interpreted as `hdfs://localhost:9000/user/hugo/in.txt'` +For example, if the default file system configured as `fs.default-scheme: hdfs://localhost:9000/`, then a file path of +`'/user/hugo/in.txt'` is interpreted as `'hdfs://localhost:9000/user/hugo/in.txt'` --- End diff -- will do ---
Github user medcv commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192689028 --- Diff: docs/internals/ide_setup.md --- @@ -89,7 +89,7 @@ IntelliJ supports checkstyle within the IDE using the Checkstyle-IDEA plugin. 3. Set the "Scan Scope" to "Only Java sources (including tests)". 4. Select _8.4_ in the "Checkstyle Version" dropdown and click apply. **This step is important, don't skip it!** -5. In the "Configuration File" pane, add a new configuration using the plus icon: +5. In the "Configuration File" page, add a new configuration using the plus icon: --- End diff -- will revert back! ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192666065 --- Diff: docs/ops/upgrading.md --- @@ -183,15 +183,15 @@ In this step, we update the framework version of the cluster. What this basicall the Flink installation with the new version. This step can depend on how you are running Flink in your cluster (e.g. standalone, on Mesos, ...). -If you are unfamiliar with installing Flink in your cluster, please read the [deployment and cluster setup documentation]({{ site.baseurl }}/ops/deployment/cluster_setup.html). +If you are unfamiliar with installing Flink in your cluster, please read the [clusters and deployment setup documentation]({{ site.baseurl }}/ops/deployment/cluster_setup.html). --- End diff -- this is an unnecessary change. ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192666792 --- Diff: docs/ops/upgrading.md --- @@ -172,7 +172,7 @@ First major step in job migration is taking a savepoint of your job running in t You can do this with the command: {% highlight shell %} -$ bin/flink savepoint :jobId [:targetDirectory] +$ ./bin/flink savepoint [savepointDirectory] --- End diff -- `savepointDirectory` has specific semantics in the documentation and is assumed to be the directory with which one can start a job from again. The directory passed here _does not fit these semantics_ as a `savepointDirectory` is created *within* the `targetDirectory`. Please revert. ---
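The directory semantics zentol describes can be sketched as follows. This is an illustrative Python sketch, not Flink code: the function name, the savepoint directory naming pattern, and the example paths are all hypothetical; only the nesting relationship (the savepoint directory is created *within* the target directory) comes from the comment above.

```python
import os

def trigger_savepoint(job_id: str, target_directory: str) -> str:
    # A savepoint directory is created *within* the target directory,
    # so the targetDirectory passed on the CLI is not itself the
    # savepointDirectory one later resumes a job from.
    # The "savepoint-<prefix>-<suffix>" naming here is an assumption;
    # only the nesting matters for the point being made.
    savepoint_directory = os.path.join(
        target_directory, "savepoint-{}-abc123".format(job_id[:6]))
    return savepoint_directory

sp = trigger_savepoint("d9f1a2b3c4", "/tmp/savepoints")
# sp lies inside the target directory but is not equal to it
```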
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192667695 --- Diff: docs/ops/filesystems.md --- @@ -112,10 +111,9 @@ To prevent inactive streams from taking up the complete pool (preventing new con `fs..limit.stream-timeout`. If a stream does not read/write any bytes for at least that amount of time, it is forcibly closed. These limits are enforced per TaskManager, so each TaskManager in a Flink application or cluster will open up to that number of connections. -In addition, the The limit are also enforced only per FileSystem instance. Because File Systems are created per scheme and authority, different +In addition, the limit are also enforced only per FileSystem instance. Because File Systems are created per scheme and authority, different --- End diff -- either `the limit is` or `the limits are`. Also, `only` should be placed before `enforced`. ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192663448 --- Diff: docs/ops/security-ssl.md --- @@ -22,22 +22,22 @@ specific language governing permissions and limitations under the License. --> -This page provides instructions on how to enable SSL for the network communication between different flink components. +This page provides instructions on how to enable SSL for the network communication between different Flink components. ## SSL Configuration -SSL can be enabled for all network communication between flink components. SSL keystores and truststore has to be deployed on each flink node and configured (conf/flink-conf.yaml) using keys in the security.ssl.* namespace (Please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when security.ssl.enabled is set to true. +SSL can be enabled for all network communication between Flink components. SSL keystores and truststore has to be deployed on each Flink node and configured (conf/flink-conf.yaml) using keys in the `security.ssl.*` namespace (Please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when `security.ssl.enabled` is set to true. 
* **taskmanager.data.ssl.enabled**: SSL flag for data communication between task managers * **blob.service.ssl.enabled**: SSL flag for blob service client/server communication -* **akka.ssl.enabled**: SSL flag for the akka based control connection between the flink client, jobmanager and taskmanager -* **jobmanager.web.ssl.enabled**: Flag to enable https access to the jobmanager's web frontend +* **akka.ssl.enabled**: SSL flag for akka based control connection between the Flink client, JobManager and TaskManager --- End diff -- we typically write `jobmanager` and `taskmanager`, please revert ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192662979 --- Diff: docs/ops/filesystems.md --- @@ -70,21 +70,20 @@ That way, Flink seamlessly supports all of Hadoop file systems, and all Hadoop-c - **har** - ... - ## Common File System configurations The following configuration settings exist across different file systems Default File System -If paths to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. +If path to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. {% highlight yaml %} fs.default-scheme: {% endhighlight %} -For example, if the default file system configured as `fs.default-scheme: hdfs://localhost:9000/`, then a a file path of -`/user/hugo/in.txt'` is interpreted as `hdfs://localhost:9000/user/hugo/in.txt'` +For example, if the default file system configured as `fs.default-scheme: hdfs://localhost:9000/`, then a file path of +`'/user/hugo/in.txt'` is interpreted as `'hdfs://localhost:9000/user/hugo/in.txt'` --- End diff -- I suggest removing the `'` instead. ---
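The `fs.default-scheme` behaviour discussed in that diff can be sketched in Python; `resolve_path` is a hypothetical helper that mirrors the documented interpretation of scheme-less paths, not how Flink actually resolves them.

```python
from urllib.parse import urlparse

def resolve_path(path: str, default_scheme: str = "hdfs://localhost:9000/") -> str:
    # Paths with an explicit scheme are used as-is; scheme-less paths
    # are interpreted against the configured default scheme/authority.
    if urlparse(path).scheme:
        return path
    return default_scheme.rstrip("/") + path

resolved = resolve_path("/user/hugo/in.txt")
# With the example default of hdfs://localhost:9000/, this yields
# hdfs://localhost:9000/user/hugo/in.txt
```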
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192662821 --- Diff: docs/ops/filesystems.md --- @@ -70,21 +70,20 @@ That way, Flink seamlessly supports all of Hadoop file systems, and all Hadoop-c - **har** - ... - ## Common File System configurations The following configuration settings exist across different file systems Default File System -If paths to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. +If path to files do not explicitly specify a file system scheme (and authority), a default scheme (and authority) will be used. --- End diff -- not a typo ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192662153 --- Diff: docs/dev/execution_configuration.md --- @@ -45,41 +45,41 @@ The following configuration options are available: (the default is bold) - **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs. With the closure cleaner disabled, it might happen that an anonymous user function is referencing the surrounding class, which is usually not Serializable. This will lead to exceptions by the serializer. -- `getParallelism()` / `setParallelism(int parallelism)` Set the default parallelism for the job. +- `getParallelism()` / `setParallelism(int parallelism)`. Set the default parallelism for the job. --- End diff -- The period doesn't make sense as it isn't a sentence. ---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/6112#discussion_r192662387 --- Diff: docs/internals/ide_setup.md --- @@ -89,7 +89,7 @@ IntelliJ supports checkstyle within the IDE using the Checkstyle-IDEA plugin. 3. Set the "Scan Scope" to "Only Java sources (including tests)". 4. Select _8.4_ in the "Checkstyle Version" dropdown and click apply. **This step is important, don't skip it!** -5. In the "Configuration File" pane, add a new configuration using the plus icon: +5. In the "Configuration File" page, add a new configuration using the plus icon: --- End diff -- this is not a typo ---
GitHub user medcv opened a pull request: https://github.com/apache/flink/pull/6112 [FLINK-9508][Docs]General Spell Check on Flink Docs ## What is the purpose of the change General spell check for Flink docs ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) You can merge this pull request into a Git repository by running: $ git pull https://github.com/medcv/flink FLINK-9508 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/6112.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #6112 commit 3695a59f91352eb83b0dddc9f8ff8b54b6e98a32 Author: Yadan.JS Date: 2018-05-29T03:13:59Z [FLINK-9508][Docs]General Spell Check on Flink Docs ---