Repository: flink
Updated Branches:
  refs/heads/release-1.2 3b4f6cf8c -> e9ada34f2


[FLINK-5751] [docs] Fix some broken links
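
The fixes mostly follow one pattern: root-absolute or extension-less doc links are rewritten to use Jekyll's {{ site.baseurl }} prefix and an explicit .html extension, so that they resolve correctly when the docs are served under a version-specific base path (this rationale is inferred from the diff below; the commit message itself only says "fix some broken links"). A representative before/after, taken from the docs/dev/batch/examples.md hunk below:

    [Flink's DataSet API](/dev/batch/index.html)
    [Flink's DataSet API]({{ site.baseurl }}/dev/batch/index.html)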


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/e9ada34f
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/e9ada34f
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/e9ada34f

Branch: refs/heads/release-1.2
Commit: e9ada34f2d30c0062953fda9d994630631b2e6e7
Parents: 3b4f6cf
Author: Patrick Lucas <m...@patricklucas.com>
Authored: Wed Feb 15 17:14:43 2017 -0800
Committer: Ufuk Celebi <u...@apache.org>
Committed: Thu Feb 16 10:10:21 2017 +0100

----------------------------------------------------------------------
 docs/dev/batch/examples.md                |  2 +-
 docs/dev/batch/index.md                   | 18 +++++++++---------
 docs/dev/datastream_api.md                | 10 +++++-----
 docs/dev/execution_configuration.md       |  6 +++---
 docs/dev/stream/queryable_state.md        |  2 +-
 docs/dev/stream/state.md                  |  2 +-
 docs/dev/table_api.md                     |  2 +-
 docs/dev/types_serialization.md           |  2 +-
 docs/examples/index.md                    |  4 ++--
 docs/quickstart/run_example_quickstart.md |  4 ++--
 docs/setup/config.md                      |  4 ++--
 11 files changed, 28 insertions(+), 28 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/batch/examples.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/examples.md b/docs/dev/batch/examples.md
index 7b6132c..fe478f8 100644
--- a/docs/dev/batch/examples.md
+++ b/docs/dev/batch/examples.md
@@ -25,7 +25,7 @@ under the License.
 
 The following example programs showcase different applications of Flink
 from simple word counting to graph algorithms. The code samples illustrate the
-use of [Flink's DataSet API](/dev/batch/index.html).
+use of [Flink's DataSet API]({{ site.baseurl }}/dev/batch/index.html).
 
 The full source code of the following and more examples can be found in the __flink-examples-batch__
 or __flink-examples-streaming__ module of the Flink source repository.

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/batch/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/index.md b/docs/dev/batch/index.md
index d171a89..6fccc04 100644
--- a/docs/dev/batch/index.md
+++ b/docs/dev/batch/index.md
@@ -272,7 +272,7 @@ DataSet<Tuple3<Integer, String, Double>> output = input.sum(0).andMin(2);
         Joins two data sets by creating all pairs of elements that are equal on their keys.
         Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
         FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
-        elements. See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define join keys.
+        elements. See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define join keys.
 {% highlight java %}
 result = input1.join(input2)
                .where(0)       // key of the first input (tuple field 0)
@@ -298,7 +298,7 @@ result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
     <tr>
       <td><strong>OuterJoin</strong></td>
       <td>
-        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a <code>null</code> value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)         elements. See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define join keys.
+        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a <code>null</code> value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)         elements. See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define join keys.
 {% highlight java %}
 input1.leftOuterJoin(input2) // rightOuterJoin or fullOuterJoin for right or full outer joins
       .where(0)              // key of the first input (tuple field 0)
@@ -320,7 +320,7 @@ input1.leftOuterJoin(input2) // rightOuterJoin or fullOuterJoin for right or ful
       <td>
         <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
         fields and then joins the groups. The transformation function is called per pair of groups.
-        See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
+        See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
 {% highlight java %}
 data1.coGroup(data2)
      .where(0)
@@ -592,7 +592,7 @@ val output: DataSet[(Int, String, Double)] = input.sum(0).min(2)
         Joins two data sets by creating all pairs of elements that are equal on their keys.
         Optionally uses a JoinFunction to turn the pair of elements into a single element, or a
         FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)
-        elements. See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define join keys.
+        elements. See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define join keys.
 {% highlight scala %}
 // In this case tuple fields are used as keys. "0" is the join field on the first tuple
 // "1" is the join field on the second tuple.
@@ -618,7 +618,7 @@ val result = input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
     <tr>
       <td><strong>OuterJoin</strong></td>
       <td>
-        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)         elements. See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define join keys.
+        Performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the "outer" side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a `null` value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none)         elements. See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define join keys.
 {% highlight scala %}
 val joined = left.leftOuterJoin(right).where(0).equalTo(1) {
    (left, right) =>
@@ -634,7 +634,7 @@ val joined = left.leftOuterJoin(right).where(0).equalTo(1) {
       <td>
         <p>The two-dimensional variant of the reduce operation. Groups each input on one or more
         fields and then joins the groups. The transformation function is called per pair of groups.
-        See the <a href="/dev/api_concepts#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
+        See the <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys section</a> to learn how to define coGroup keys.</p>
 {% highlight scala %}
 data1.coGroup(data2).where(0).equalTo(1)
 {% endhighlight %}
@@ -775,12 +775,12 @@ data.map {
   case (id, name, temperature) => // [...]
 }
 {% endhighlight %}
-is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
+is not supported by the API out-of-the-box. To use this feature, you should use a <a href="{{ site.baseurl }}/dev/scala_api_extensions.html">Scala API extension</a>.
 
 </div>
 </div>
 
-The [parallelism]({{ site.baseurl }}/dev/parallel) of a transformation can be defined by `setParallelism(int)` while
+The [parallelism]({{ site.baseurl }}/dev/parallel.html) of a transformation can be defined by `setParallelism(int)` while
 `name(String)` assigns a custom name to a transformation which is helpful for debugging. The same is
 possible for [Data Sources](#data-sources) and [Data Sinks](#data-sinks).
 
@@ -2103,7 +2103,7 @@ val result: DataSet[Integer] = input.map(new MyMapper())
 env.execute()
 {% endhighlight %}
 
-Access the cached file in a user function (here a `MapFunction`). The function must extend a [RichFunction]({{ site.baseurl }}/dev/api_concepts#rich-functions) class because it needs access to the `RuntimeContext`.
+Access the cached file in a user function (here a `MapFunction`). The function must extend a [RichFunction]({{ site.baseurl }}/dev/api_concepts.html#rich-functions) class because it needs access to the `RuntimeContext`.
 
 {% highlight scala %}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/datastream_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/datastream_api.md b/docs/dev/datastream_api.md
index f9f060b..df13295 100644
--- a/docs/dev/datastream_api.md
+++ b/docs/dev/datastream_api.md
@@ -210,7 +210,7 @@ dataStream.filter(new FilterFunction<Integer>() {
           <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
           <td>
             <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="/dev/api_concepts#specifying-keys">keys</a> on how to specify keys.
+            Internally, this is implemented with hash partitioning. See <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys</a> on how to specify keys.
             This transformation returns a KeyedDataStream.</p>
     {% highlight java %}
 dataStream.keyBy("someKey") // Key by field "someKey"
@@ -597,7 +597,7 @@ dataStream.filter { _ != 0 }
           <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
           <td>
             <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="/dev/api_concepts#specifying-keys">keys</a> on how to specify keys.
+            Internally, this is implemented with hash partitioning. See <a href="{{ site.baseurl }}/dev/api_concepts.html#specifying-keys">keys</a> on how to specify keys.
             This transformation returns a KeyedDataStream.</p>
     {% highlight scala %}
 dataStream.keyBy("someKey") // Key by field "someKey"
@@ -874,7 +874,7 @@ data.map {
   case (id, name, temperature) => // [...]
 }
 {% endhighlight %}
-is not supported by the API out-of-the-box. To use this feature, you should use a <a href="./scala_api_extensions.html">Scala API extension</a>.
+is not supported by the API out-of-the-box. To use this feature, you should use a <a href="scala_api_extensions.html">Scala API extension</a>.
 
 
 </div>
@@ -1582,7 +1582,7 @@ Execution Parameters
 
 The `StreamExecutionEnvironment` contains the `ExecutionConfig` which allows to set job specific configuration values for the runtime.
 
-Please refer to [execution configuration]({{ site.baseurl }}/dev/execution_configuration)
+Please refer to [execution configuration]({{ site.baseurl }}/dev/execution_configuration.html)
 for an explanation of most parameters. These parameters pertain specifically to the DataStream API:
 
 - `enableTimestamps()` / **`disableTimestamps()`**: Attach a timestamp to each event emitted from a source.
@@ -1595,7 +1595,7 @@ for an explanation of most parameters. These parameters pertain specifically to
 
 ### Fault Tolerance
 
-[State & Checkpointing]({{ site.baseurl }}/dev/stream/checkpointing) describes how to enable and configure Flink's checkpointing mechanism.
+[State & Checkpointing]({{ site.baseurl }}/dev/stream/checkpointing.html) describes how to enable and configure Flink's checkpointing mechanism.
 
 ### Controlling Latency
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/execution_configuration.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution_configuration.md b/docs/dev/execution_configuration.md
index 50a9f76..94e788c 100644
--- a/docs/dev/execution_configuration.md
+++ b/docs/dev/execution_configuration.md
@@ -23,7 +23,7 @@ under the License.
 -->
 
 The `StreamExecutionEnvironment` contains the `ExecutionConfig` which allows to set job specific configuration values for the runtime.
-To change the defaults that affect all jobs, see [Configuration]({{ site.baseurl }}/setup/config).
+To change the defaults that affect all jobs, see [Configuration]({{ site.baseurl }}/setup/config.html).
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
@@ -49,9 +49,9 @@ With the closure cleaner disabled, it might happen that an anonymous user functi
 
 - `getMaxParallelism()` / `setMaxParallelism(int parallelism)` Set the default maximum parallelism for the job. This setting determines the maximum degree of parallelism and specifies the upper limit for dynamic scaling.
 
-- `getNumberOfExecutionRetries()` / `setNumberOfExecutionRetries(int numberOfExecutionRetries)` Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of `-1` indicates that the system default value (as defined in the configuration) should be used. This is deprecated, use [restart strategies]({{ site.baseurl }}/dev/restart_strategies) instead.
+- `getNumberOfExecutionRetries()` / `setNumberOfExecutionRetries(int numberOfExecutionRetries)` Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of `-1` indicates that the system default value (as defined in the configuration) should be used. This is deprecated, use [restart strategies]({{ site.baseurl }}/dev/restart_strategies.html) instead.
 
-- `getExecutionRetryDelay()` / `setExecutionRetryDelay(long executionRetryDelay)` Sets the delay in milliseconds that the system waits after a job has failed, before re-executing it. The delay starts after all tasks have been successfully been stopped on the TaskManagers, and once the delay is past, the tasks are re-started. This parameter is useful to delay re-execution in order to let certain time-out related failures surface fully (like broken connections that have not fully timed out), before attempting a re-execution and immediately failing again due to the same problem. This parameter only has an effect if the number of execution re-tries is one or more. This is deprecated, use [restart strategies]({{ site.baseurl }}/dev/restart_strategies) instead.
+- `getExecutionRetryDelay()` / `setExecutionRetryDelay(long executionRetryDelay)` Sets the delay in milliseconds that the system waits after a job has failed, before re-executing it. The delay starts after all tasks have been successfully been stopped on the TaskManagers, and once the delay is past, the tasks are re-started. This parameter is useful to delay re-execution in order to let certain time-out related failures surface fully (like broken connections that have not fully timed out), before attempting a re-execution and immediately failing again due to the same problem. This parameter only has an effect if the number of execution re-tries is one or more. This is deprecated, use [restart strategies]({{ site.baseurl }}/dev/restart_strategies.html) instead.
 
 - `getExecutionMode()` / `setExecutionMode()`. The default execution mode is PIPELINED. Sets the execution mode to execute the program. The execution mode defines whether data exchanges are performed in a batch or on a pipelined manner.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/stream/queryable_state.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/queryable_state.md b/docs/dev/stream/queryable_state.md
index 728d4d6..7d337dc 100644
--- a/docs/dev/stream/queryable_state.md
+++ b/docs/dev/stream/queryable_state.md
@@ -32,7 +32,7 @@ under the License.
 </div>
 
 In a nutshell, this feature allows users to query Flink's managed partitioned state
-(see [Working with State]({{ site.baseurl }}/dev/stream/state)) from outside of
+(see [Working with State]({{ site.baseurl }}/dev/stream/state.html)) from outside of
 Flink. For some scenarios, queryable state thus eliminates the need for distributed
 operations/transactions with external systems such as key-value stores which are often the
 bottleneck in practice.

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/stream/state.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/state.md b/docs/dev/stream/state.md
index 2472226..e554e29 100644
--- a/docs/dev/stream/state.md
+++ b/docs/dev/stream/state.md
@@ -139,7 +139,7 @@ want to retrieve, you create either a `ValueStateDescriptor`, a `ListStateDescri
 a `ReducingStateDescriptor` or a `FoldingStateDescriptor`.
 
 State is accessed using the `RuntimeContext`, so it is only possible in *rich functions*.
-Please see [here]({{ site.baseurl }}/dev/api_concepts#rich-functions) for
+Please see [here]({{ site.baseurl }}/dev/api_concepts.html#rich-functions) for
 information about that, but we will also see an example shortly. The `RuntimeContext` that
 is available in a `RichFunction` has these methods for accessing state:
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/table_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/table_api.md b/docs/dev/table_api.md
index 80b61f9..8efdc74 100644
--- a/docs/dev/table_api.md
+++ b/docs/dev/table_api.md
@@ -479,7 +479,7 @@ A registered table can be accessed from a `TableEnvironment` as follows:
 ### Table API Operators
 
 The Table API features a domain-specific language to execute language-integrated queries on structured data in Scala and Java.
-This section gives a brief overview of the available operators. You can find more details of operators in the [Javadoc]({{site.baseurl}}/api/java/org/apache/flink/table/api/Table.html).
+This section gives a brief overview of the available operators. You can find more details of operators in the [Javadoc](http://flink.apache.org/docs/latest/api/java/org/apache/flink/table/api/Table.html).
 
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/dev/types_serialization.md
----------------------------------------------------------------------
diff --git a/docs/dev/types_serialization.md b/docs/dev/types_serialization.md
index 2b43563..e723c33 100644
--- a/docs/dev/types_serialization.md
+++ b/docs/dev/types_serialization.md
@@ -62,7 +62,7 @@ The most frequent issues where users need to interact with Flink's data type han
   by itself. Not all types are seamlessly handled by Kryo (and thus by Flink). For example, many Google Guava collection types do not work well
   by default. The solution is to register additional serializers for the types that cause problems.
   Call `.getConfig().addDefaultKryoSerializer(clazz, serializer)` on the `StreamExecutionEnvironment` or `ExecutionEnvironment`.
-  Additional Kryo serializers are available in many libraries. See [Custom Serializers]({{ site.baseurl }}/dev/custom_serializers) for more details on working with custom serializers.
+  Additional Kryo serializers are available in many libraries. See [Custom Serializers]({{ site.baseurl }}/dev/custom_serializers.html) for more details on working with custom serializers.
 
 * **Adding Type Hints:** Sometimes, when Flink cannot infer the generic types despits all tricks, a user must pass a *type hint*. That is generally
   only necessary in the Java API. The [Type Hints Section](#type-hints-in-the-java-api) describes that in more detail.

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/examples/index.md
----------------------------------------------------------------------
diff --git a/docs/examples/index.md b/docs/examples/index.md
index d04a1e9..ac47b10 100644
--- a/docs/examples/index.md
+++ b/docs/examples/index.md
@@ -25,9 +25,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Sample Project in Java]({{ site.baseurl }}/quickstart/java_api_quickstart) and [Sample Project in Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart) are guides to setting up Maven and SBT projects and include simple implementations of a word count application.
+[Sample Project in Java]({{ site.baseurl }}/quickstart/java_api_quickstart.html) and [Sample Project in Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart.html) are guides to setting up Maven and SBT projects and include simple implementations of a word count application.
 
-[Monitoring Wikipedia Edits]({{ site.baseurl }}/quickstart/run_example_quickstart) is a more complete example of a streaming analytics application.
+[Monitoring Wikipedia Edits]({{ site.baseurl }}/quickstart/run_example_quickstart.html) is a more complete example of a streaming analytics application.
 
 [Building real-time dashboard applications with Apache Flink, Elasticsearch, and Kibana](https://www.elastic.co/blog/building-real-time-dashboard-applications-with-apache-flink-elasticsearch-and-kibana)
 is a blog post at elastic.co showing how to build a real-time dashboard solution for streaming data analytics using Apache Flink, Elasticsearch, and Kibana.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/quickstart/run_example_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/run_example_quickstart.md b/docs/quickstart/run_example_quickstart.md
index d68a8a5..123e265 100644
--- a/docs/quickstart/run_example_quickstart.md
+++ b/docs/quickstart/run_example_quickstart.md
@@ -280,8 +280,8 @@ was produced.
 
 This should get you started with writing your own Flink programs. To learn more
 you can check out our guides
-about [basic concepts]({{ site.baseurl }}/dev/api_concepts) and the
-[DataStream API]({{ site.baseurl }}/dev/datastream_api). Stick
+about [basic concepts]({{ site.baseurl }}/dev/api_concepts.html) and the
+[DataStream API]({{ site.baseurl }}/dev/datastream_api.html). Stick
 around for the bonus exercise if you want to learn about setting up a Flink cluster on
 your own machine and writing results to [Kafka](http://kafka.apache.org).
 

http://git-wip-us.apache.org/repos/asf/flink/blob/e9ada34f/docs/setup/config.md
----------------------------------------------------------------------
diff --git a/docs/setup/config.md b/docs/setup/config.md
index c720b50..5b00086 100644
--- a/docs/setup/config.md
+++ b/docs/setup/config.md
@@ -59,7 +59,7 @@ The configuration files for the TaskManagers can be different, Flink does not as
 - `taskmanager.numberOfTaskSlots`: The number of parallel operator or user function instances that a single TaskManager can run (DEFAULT: 1). If this value is larger than 1, a single TaskManager takes multiple instances of a function or operator. That way, the TaskManager can utilize multiple CPU cores, but at the same time, the available memory is divided between the different operator or function instances. This value is typically proportional to the number of physical CPU cores that the TaskManager's machine has (e.g., equal to the number of cores, or half the number of cores). [More about task slots](config.html#configuring-taskmanager-processing-slots).
 
 - `parallelism.default`: The default parallelism to use for programs that have no parallelism specified. (DEFAULT: 1). For setups that have no concurrent jobs running, setting this value to NumTaskManagers * NumSlotsPerTaskManager will cause the system to use all available execution resources for the program's execution. **Note**: The default parallelism can be overwriten for an entire job by calling `setParallelism(int parallelism)` on the `ExecutionEnvironment` or by passing `-p <parallelism>` to the Flink Command-line frontend. It can be overwritten for single transformations by calling `setParallelism(int
-parallelism)` on an operator. See [Parallel Execution]({{site.baseurl}}/dev/parallel) for more information about parallelism.
+parallelism)` on an operator. See [Parallel Execution]({{site.baseurl}}/dev/parallel.html) for more information about parallelism.
 
 - `fs.default-scheme`: The default filesystem scheme to be used, with the necessary authority to contact, e.g. the host:port of the NameNode in the case of HDFS (if needed).
 By default, this is set to `file:///` which points to the local filesystem. This means that the local
@@ -608,6 +608,6 @@ Flink executes a program in parallel by splitting it into subtasks and schedulin
 
 Each Flink TaskManager provides processing slots in the cluster. The number of slots is typically proportional to the number of available CPU cores __of each__ TaskManager. As a general recommendation, the number of available CPU cores is a good default for `taskmanager.numberOfTaskSlots`.
 
-When starting a Flink application, users can supply the default number of slots to use for that job. The command line value therefore is called `-p` (for parallelism). In addition, it is possible to [set the number of slots in the programming APIs]({{site.baseurl}}/dev/parallel) for the whole application and for individual operators.
+When starting a Flink application, users can supply the default number of slots to use for that job. The command line value therefore is called `-p` (for parallelism). In addition, it is possible to [set the number of slots in the programming APIs]({{site.baseurl}}/dev/parallel.html) for the whole application and for individual operators.
 
 <img src="{{ site.baseurl }}/fig/slots_parallelism.svg" class="img-responsive" />
