Repository: flink-web
Updated Branches:
  refs/heads/asf-site e418ca0f2 -> 22649e2d1
regenerate website

Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/22649e2d
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/22649e2d
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/22649e2d

Branch: refs/heads/asf-site
Commit: 22649e2d1736da5ffac9992960e2addd4d7433ee
Parents: e418ca0
Author: Maximilian Michels <[email protected]>
Authored: Wed Jul 1 11:21:42 2015 +0200
Committer: Maximilian Michels <[email protected]>
Committed: Wed Jul 1 11:21:42 2015 +0200

----------------------------------------------------------------------
 content/faq.html | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/22649e2d/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index 85fa392..e61dc3d 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -267,10 +267,10 @@ of the master and the worker where the exception occurred
 <h3 id="how-do-i-debug-flink-programs">How do I debug Flink programs?</h3>
 
 <ul>
-  <li>When you start a program locally with the <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/local_execution.html">LocalExecutor</a>,
+  <li>When you start a program locally with the <a href="http://flink.apache.org/docs/master/apis/local_execution.html">LocalExecutor</a>,
 you can place breakpoints in your functions and debug them like normal Java/Scala programs.</li>
-  <li>The <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#accumulators--counters">Accumulators</a> are very helpful in
+  <li>The <a href="http://flink.apache.org/docs/master/apis/programming_guide.html#accumulators--counters">Accumulators</a> are very helpful in
 tracking the behavior of the parallel execution.
 They allow you to gather information inside the program's operations
 and show them after the program execution.</li>
@@ -294,8 +294,8 @@ parallelism has to be 1 and set it accordingly.</p>
 
 <p>The parallelism can be set in numerous ways to ensure a fine-grained
 control over the execution of a Flink program. See
-the <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options">Configuration guide</a> for detailed instructions on how to
-set the parallelism. Also check out <a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing
+the <a href="http://flink.apache.org/docs/master/setup/config.html#common-options">Configuration guide</a> for detailed instructions on how to
+set the parallelism. Also check out <a href="http://flink.apache.org/docs/master/setup/config.html#configuring-taskmanager-processing-slots">this figure</a> detailing
 how the processing slots and parallelism are related to each other.</p>
 
 <h2 id="errors">Errors</h2>
 
@@ -330,7 +330,7 @@ This can be achieved by using a context bound:</p>
   <span class="n">input</span><span class="o">.</span><span class="n">reduceGroup</span><span class="o">(</span> <span class="n">i</span> <span class="k">=></span> <span class="n">i</span><span class="o">.</span><span class="n">toSeq</span> <span class="o">)</span>
 <span class="o">}</span></code></pre></div>
 
-<p>See <a href="http://ci.apache.org/projects/flink/flink-docs-master/internals/types_serialization.html">Type Extraction and Serialization</a> for
+<p>See <a href="http://flink.apache.org/docs/master/internals/types_serialization.html">Type Extraction and Serialization</a> for
 an in-depth discussion of how Flink handles types.</p>
 
 <h3 id="i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this">I get an error message saying that not enough buffers are available.
How do I fix this?</h3>
@@ -340,7 +340,7 @@ you need to adapt the number of network buffers via the config parameter
 <code>taskmanager.network.numberOfBuffers</code>. As a rule-of-thumb, the
 number of buffers should be at least
 <code>4 * numberOfNodes * numberOfTasksPerNode^2</code>. See
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html">Configuration Reference</a> for details.</p>
+<a href="http://flink.apache.org/docs/master/setup/config.html">Configuration Reference</a> for details.</p>
 
 <h3 id="my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My job fails early with a java.io.EOFException. What could be the cause?</h3>
 
@@ -362,7 +362,7 @@ breaks.</p>
 at org.apache.flink.runtime.fs.hdfs.DistributedFileSystem.initialize<span class="o">(</span>DistributedFileSystem.java:276</code></pre></div>
 
 <p>Please refer to the <a href="/downloads.html#maven">download page</a> and
-the <a href="https://github.com/apache/flink/tree/master/README.md">github</a>
+the https://github.com/apache/flink/tree/master/README.md
 for details on how to set up Flink for different Hadoop and HDFS versions.</p>
 
 <h3 id="my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do">My job fails with various exceptions from the HDFS/Hadoop code. What can I do?</h3>
 
@@ -465,7 +465,7 @@ destage operations to disk, if necessary. By default, the system reserves around
 70% of the memory. If you frequently run applications that need more memory in
 the user-defined functions, you can reduce that value using the configuration
 entries <code>taskmanager.memory.fraction</code> or
 <code>taskmanager.memory.size</code>. See the
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html">Configuration Reference</a> for details. This will leave more memory to JVM heap,
+<a href="http://flink.apache.org/docs/master/setup/config.html">Configuration Reference</a> for details.
 This will leave more memory to JVM heap,
 but may cause data processing tasks to go to disk more often.</p>
 </li>
 </ol>
@@ -588,12 +588,12 @@ open source project in the next versions.</p>
 
 <h3 id="are-hadoop-like-utilities-such-as-counters-and-the-distributedcache-supported">Are Hadoop-like utilities, such as Counters and the DistributedCache supported?</h3>
 
-<p><a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#accumulators--counters">Flink's Accumulators</a> work very similar like
+<p><a href="http://flink.apache.org/docs/master/apis/programming_guide.html#accumulators--counters">Flink's Accumulators</a> work very similar like
 [Hadoop's counters, but are more powerful.</p>
 
-<p>Flink has a <a href="https://github.com/apache/flink/tree/master//flink-core/src/main/java/org/apache/flink/api/common/cache/DistributedCache.java">github</a> that is deeply integrated with the APIs. Please refer to the <a href="https://github.com/apache/flink/tree/master//flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L561">github</a> for details on how to use it.</p>
+<p>Flink has a https://github.com/apache/flink/tree/master//flink-core/src/main/java/org/apache/flink/api/common/cache/DistributedCache.java that is deeply integrated with the APIs. Please refer to the https://github.com/apache/flink/tree/master//flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L561 for details on how to use it.</p>
 
-<p>In order to make data sets available on all tasks, we encourage you to use <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#broadcast-variables">Broadcast Variables</a> instead.
+<p>In order to make data sets available on all tasks, we encourage you to use <a href="http://flink.apache.org/docs/master/apis/programming_guide.html#broadcast-variables">Broadcast Variables</a> instead.
 They are more efficient and easier to use than the distributed cache.</p>
 </div>
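Editor's note: the FAQ text in the diff above gives the rule of thumb that `taskmanager.network.numberOfBuffers` should be at least `4 * numberOfNodes * numberOfTasksPerNode^2`. A minimal sketch of that arithmetic (the class and method names here are ours for illustration, not part of Flink):

```java
public class NetworkBufferEstimate {

    // Rule of thumb quoted in the FAQ: at least
    // 4 * numberOfNodes * numberOfTasksPerNode^2 network buffers,
    // set via the config key taskmanager.network.numberOfBuffers.
    static long recommendedBuffers(long numberOfNodes, long numberOfTasksPerNode) {
        return 4 * numberOfNodes * numberOfTasksPerNode * numberOfTasksPerNode;
    }

    public static void main(String[] args) {
        // e.g. a 4-node cluster with 8 processing slots per TaskManager
        System.out.println(recommendedBuffers(4, 8)); // prints 1024
    }
}
```

The quadratic term reflects that, in the worst case, every task on a node may hold open channels to every task on every other node, so the buffer demand grows with the square of the per-node parallelism.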
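Editor's note: the configuration keys the FAQ mentions (`taskmanager.network.numberOfBuffers`, `taskmanager.memory.fraction`, `taskmanager.memory.size`) are set in `conf/flink-conf.yaml`. A hedged sketch; the values below are purely illustrative, not recommendations:

```yaml
# Number of network buffers per TaskManager (rule of thumb:
# at least 4 * numberOfNodes * numberOfTasksPerNode^2).
taskmanager.network.numberOfBuffers: 2048

# Fraction of memory the TaskManager reserves for managed memory;
# lowering it leaves more JVM heap for user-defined functions,
# at the cost of spilling to disk more often.
taskmanager.memory.fraction: 0.6
```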
