http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2015/08/24/introducing-flink-gelly.html
----------------------------------------------------------------------
diff --git a/content/news/2015/08/24/introducing-flink-gelly.html 
b/content/news/2015/08/24/introducing-flink-gelly.html
index 355ac32..f8a4fd7 100644
--- a/content/news/2015/08/24/introducing-flink-gelly.html
+++ b/content/news/2015/08/24/introducing-flink-gelly.html
@@ -226,21 +226,21 @@ and mutations as well as neighborhood aggregations.</p>
 
 <h4 id="common-graph-metrics">Common Graph Metrics</h4>
 <p>These methods can be used to retrieve several graph metrics and properties, 
such as the number
-of vertices, edges and the node degrees. </p>
+of vertices, edges and the node degrees.</p>
 
 <h4 id="transformations">Transformations</h4>
 <p>The transformation methods enable several Graph operations, using 
high-level functions similar to
 the ones provided by the batch processing API. These transformations can be 
applied one after the
-other, yielding a new Graph after each step, in a fashion similar to operators 
on DataSets: </p>
+other, yielding a new Graph after each step, in a fashion similar to operators 
on DataSets:</p>
 
-<div class="highlight"><pre><code class="language-java"><span 
class="n">inputGraph</span><span class="o">.</span><span 
class="na">getUndirected</span><span class="o">().</span><span 
class="na">mapEdges</span><span class="o">(</span><span class="k">new</span> 
<span class="n">CustomEdgeMapper</span><span 
class="o">());</span></code></pre></div>
+<div class="highlight"><pre><code class="language-java"><span 
class="n">inputGraph</span><span class="o">.</span><span 
class="na">getUndirected</span><span class="o">().</span><span 
class="na">mapEdges</span><span class="o">(</span><span class="k">new</span> 
<span class="nf">CustomEdgeMapper</span><span 
class="o">());</span></code></pre></div>
 
 <p>Transformations can be applied on:</p>
 
 <ol>
-  <li><strong>Vertices</strong>: <code>mapVertices</code>, 
<code>joinWithVertices</code>, <code>filterOnVertices</code>, 
<code>addVertex</code>, …  </li>
-  <li><strong>Edges</strong>: <code>mapEdges</code>, 
<code>filterOnEdges</code>, <code>removeEdge</code>, …   </li>
-  <li><strong>Triplets</strong> (source vertex, target vertex, edge): 
<code>getTriplets</code>  </li>
+  <li><strong>Vertices</strong>: <code>mapVertices</code>, 
<code>joinWithVertices</code>, <code>filterOnVertices</code>, 
<code>addVertex</code>, …</li>
+  <li><strong>Edges</strong>: <code>mapEdges</code>, 
<code>filterOnEdges</code>, <code>removeEdge</code>, …</li>
+  <li><strong>Triplets</strong> (source vertex, target vertex, edge): 
<code>getTriplets</code></li>
 </ol>
 
 <h4 id="neighborhood-aggregations">Neighborhood Aggregations</h4>
@@ -269,7 +269,7 @@ one or more values per vertex, the more general  
<code>groupReduceOnEdges()</cod
 <p>Assume you would want to compute the sum of the values of all incoming 
neighbors for each vertex.
 We will call the <code>reduceOnNeighbors()</code> aggregation method since the 
sum is an associative and commutative operation and the neighbors’ values are 
needed:</p>
 
-<div class="highlight"><pre><code class="language-java"><span 
class="n">graph</span><span class="o">.</span><span 
class="na">reduceOnNeighbors</span><span class="o">(</span><span 
class="k">new</span> <span class="n">SumValues</span><span class="o">(),</span> 
<span class="n">EdgeDirection</span><span class="o">.</span><span 
class="na">IN</span><span class="o">);</span></code></pre></div>
+<div class="highlight"><pre><code class="language-java"><span 
class="n">graph</span><span class="o">.</span><span 
class="na">reduceOnNeighbors</span><span class="o">(</span><span 
class="k">new</span> <span class="nf">SumValues</span><span 
class="o">(),</span> <span class="n">EdgeDirection</span><span 
class="o">.</span><span class="na">IN</span><span 
class="o">);</span></code></pre></div>
 
 <p>The vertex with id 1 is the only node that has no incoming edges. The 
result is therefore:</p>
 
@@ -374,7 +374,7 @@ vertex values do not need to be recomputed during an 
iteration.</p>
 <p>Let us reconsider the Single Source Shortest Paths algorithm. In each 
iteration, a vertex:</p>
 
 <ol>
-  <li><strong>Gather</strong> retrieves distances from its neighbors summed up 
with the corresponding edge values; </li>
+  <li><strong>Gather</strong> retrieves distances from its neighbors summed up 
with the corresponding edge values;</li>
   <li><strong>Sum</strong> compares the newly obtained distances in order to 
extract the minimum;</li>
   <li><strong>Apply</strong> and finally adopts the minimum distance computed 
in the sum step,
 provided that it is lower than its current value. If a vertex’s value does 
not change during
@@ -433,7 +433,7 @@ plays that each song has. We then filter out the list of 
songs the users do not
 playlist. Then we compute the top songs per user (i.e. the songs a user 
listened to the most).
 Finally, as a separate use-case on the same data set, we create a user-user 
similarity graph based
 on the common songs and use this resulting graph to detect communities by 
calling Gelly’s Label Propagation
-library method. </p>
+library method.</p>
 
 <p>For running the example implementation, please use the 0.10-SNAPSHOT 
version of Flink as a
dependency. The full example code base can be found <a 
href="https://github.com/apache/flink/blob/master/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example/MusicProfiles.java">here</a>.
 The public data set used for testing
@@ -455,7 +455,7 @@ playlist, we use a coGroup function to filter out the 
mismatches.</p>
 <span class="c1">// read the mismatches dataset and extract the songIDs</span>
 <span class="n">DataSet</span><span class="o">&lt;</span><span 
class="n">Tuple3</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span> <span 
class="n">String</span><span class="o">,</span> <span 
class="n">Integer</span><span class="o">&gt;&gt;</span> <span 
class="n">validTriplets</span> <span class="o">=</span> <span 
class="n">triplets</span>
         <span class="o">.</span><span class="na">coGroup</span><span 
class="o">(</span><span class="n">mismatches</span><span 
class="o">).</span><span class="na">where</span><span class="o">(</span><span 
class="mi">1</span><span class="o">).</span><span 
class="na">equalTo</span><span class="o">(</span><span class="mi">0</span><span 
class="o">)</span>
-        <span class="o">.</span><span class="na">with</span><span 
class="o">(</span><span class="k">new</span> <span 
class="n">CoGroupFunction</span><span class="o">()</span> <span 
class="o">{</span>
+        <span class="o">.</span><span class="na">with</span><span 
class="o">(</span><span class="k">new</span> <span 
class="nf">CoGroupFunction</span><span class="o">()</span> <span 
class="o">{</span>
                 <span class="kt">void</span> <span 
class="nf">coGroup</span><span class="o">(</span><span 
class="n">Iterable</span> <span class="n">triplets</span><span 
class="o">,</span> <span class="n">Iterable</span> <span 
class="n">invalidSongs</span><span class="o">,</span> <span 
class="n">Collector</span> <span class="n">out</span><span class="o">)</span> 
<span class="o">{</span>
                         <span class="k">if</span> <span 
class="o">(!</span><span class="n">invalidSongs</span><span 
class="o">.</span><span class="na">iterator</span><span 
class="o">().</span><span class="na">hasNext</span><span class="o">())</span> 
<span class="o">{</span>
                             <span class="k">for</span> <span 
class="o">(</span><span class="n">Tuple3</span> <span class="n">triplet</span> 
<span class="o">:</span> <span class="n">triplets</span><span 
class="o">)</span> <span class="o">{</span> <span class="c1">// valid 
triplet</span>
@@ -493,7 +493,7 @@ basically iterate through the edge value and collect the 
target (song) of the ma
 
 <div class="highlight"><pre><code class="language-java"><span class="c1">//get 
the top track (most listened to) for each user</span>
 <span class="n">DataSet</span><span class="o">&lt;</span><span 
class="n">Tuple2</span><span class="o">&gt;</span> <span 
class="n">usersWithTopTrack</span> <span class="o">=</span> <span 
class="n">userSongGraph</span>
-        <span class="o">.</span><span 
class="na">groupReduceOnEdges</span><span class="o">(</span><span 
class="k">new</span> <span class="n">GetTopSongPerUser</span><span 
class="o">(),</span> <span class="n">EdgeDirection</span><span 
class="o">.</span><span class="na">OUT</span><span class="o">);</span>
+        <span class="o">.</span><span 
class="na">groupReduceOnEdges</span><span class="o">(</span><span 
class="k">new</span> <span class="nf">GetTopSongPerUser</span><span 
class="o">(),</span> <span class="n">EdgeDirection</span><span 
class="o">.</span><span class="na">OUT</span><span class="o">);</span>
 
 <span class="kd">class</span> <span class="nc">GetTopSongPerUser</span> <span 
class="kd">implements</span> <span 
class="n">EdgesFunctionWithVertexValue</span> <span class="o">{</span>
     <span class="kt">void</span> <span class="nf">iterateEdges</span><span 
class="o">(</span><span class="n">Vertex</span> <span 
class="n">vertex</span><span class="o">,</span> <span 
class="n">Iterable</span><span class="o">&lt;</span><span 
class="n">Edge</span><span class="o">&gt;</span> <span 
class="n">edges</span><span class="o">)</span> <span class="o">{</span>
@@ -506,7 +506,7 @@ basically iterate through the edge value and collect the 
target (song) of the ma
                 <span class="n">topSong</span> <span class="o">=</span> <span 
class="n">edge</span><span class="o">.</span><span 
class="na">getTarget</span><span class="o">();</span>
             <span class="o">}</span>
         <span class="o">}</span>
-        <span class="k">return</span> <span class="k">new</span> <span 
class="n">Tuple2</span><span class="o">(</span><span 
class="n">vertex</span><span class="o">.</span><span 
class="na">getId</span><span class="o">(),</span> <span 
class="n">topSong</span><span class="o">);</span>
+        <span class="k">return</span> <span class="k">new</span> <span 
class="nf">Tuple2</span><span class="o">(</span><span 
class="n">vertex</span><span class="o">.</span><span 
class="na">getId</span><span class="o">(),</span> <span 
class="n">topSong</span><span class="o">);</span>
     <span class="o">}</span>
 <span class="o">}</span></code></pre></div>
 
@@ -523,10 +523,10 @@ in the figure below.</p>
 
 <p>To form the user-user graph in Flink, we will simply take the edges from 
the user-song graph
 (left-hand side of the image), group them by song-id, and then add all the 
users (source vertex ids)
-to an ArrayList. </p>
+to an ArrayList.</p>
 
 <p>We then match users who listened to the same song two by two, creating a 
new edge to mark their
-common interest (right-hand side of the image). </p>
+common interest (right-hand side of the image).</p>
 
 <p>Afterwards, we perform a <code>distinct()</code> operation to avoid 
creation of duplicate data.
 Considering that we now have the DataSet of edges which present interest, 
creating a graph is as
@@ -542,14 +542,14 @@ straightforward as a call to the 
<code>Graph.fromDataSet()</code> method.</p>
                 <span class="o">}</span>
         <span class="o">})</span>
         <span class="o">.</span><span class="na">groupBy</span><span 
class="o">(</span><span class="mi">1</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">reduceGroup</span><span 
class="o">(</span><span class="k">new</span> <span 
class="n">GroupReduceFunction</span><span class="o">()</span> <span 
class="o">{</span>
+        <span class="o">.</span><span class="na">reduceGroup</span><span 
class="o">(</span><span class="k">new</span> <span 
class="nf">GroupReduceFunction</span><span class="o">()</span> <span 
class="o">{</span>
                 <span class="kt">void</span> <span 
class="nf">reduce</span><span class="o">(</span><span 
class="n">Iterable</span><span class="o">&lt;</span><span 
class="n">Edge</span><span class="o">&gt;</span> <span 
class="n">edges</span><span class="o">,</span> <span 
class="n">Collector</span><span class="o">&lt;</span><span 
class="n">Edge</span><span class="o">&gt;</span> <span 
class="n">out</span><span class="o">)</span> <span class="o">{</span>
-                    <span class="n">List</span> <span class="n">users</span> 
<span class="o">=</span> <span class="k">new</span> <span 
class="n">ArrayList</span><span class="o">();</span>
+                    <span class="n">List</span> <span class="n">users</span> 
<span class="o">=</span> <span class="k">new</span> <span 
class="nf">ArrayList</span><span class="o">();</span>
                     <span class="k">for</span> <span class="o">(</span><span 
class="n">Edge</span> <span class="n">edge</span> <span class="o">:</span> 
<span class="n">edges</span><span class="o">)</span> <span class="o">{</span>
                         <span class="n">users</span><span 
class="o">.</span><span class="na">add</span><span class="o">(</span><span 
class="n">edge</span><span class="o">.</span><span 
class="na">getSource</span><span class="o">());</span>
                         <span class="k">for</span> <span 
class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span 
class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span 
class="n">i</span> <span class="o">&lt;</span> <span 
class="n">users</span><span class="o">.</span><span class="na">size</span><span 
class="o">()</span> <span class="o">-</span> <span class="mi">1</span><span 
class="o">;</span> <span class="n">i</span><span class="o">++)</span> <span 
class="o">{</span>
                             <span class="k">for</span> <span 
class="o">(</span><span class="kt">int</span> <span class="n">j</span> <span 
class="o">=</span> <span class="n">i</span><span class="o">+</span><span 
class="mi">1</span><span class="o">;</span> <span class="n">j</span> <span 
class="o">&lt;</span> <span class="n">users</span><span class="o">.</span><span 
class="na">size</span><span class="o">()</span> <span class="o">-</span> <span 
class="mi">1</span><span class="o">;</span> <span class="n">j</span><span 
class="o">++)</span> <span class="o">{</span>
-                                <span class="n">out</span><span 
class="o">.</span><span class="na">collect</span><span class="o">(</span><span 
class="k">new</span> <span class="n">Edge</span><span class="o">(</span><span 
class="n">users</span><span class="o">.</span><span class="na">get</span><span 
class="o">(</span><span class="n">i</span><span class="o">),</span> <span 
class="n">users</span><span class="o">.</span><span class="na">get</span><span 
class="o">(</span><span class="n">j</span><span class="o">)));</span>
+                                <span class="n">out</span><span 
class="o">.</span><span class="na">collect</span><span class="o">(</span><span 
class="k">new</span> <span class="nf">Edge</span><span class="o">(</span><span 
class="n">users</span><span class="o">.</span><span class="na">get</span><span 
class="o">(</span><span class="n">i</span><span class="o">),</span> <span 
class="n">users</span><span class="o">.</span><span class="na">get</span><span 
class="o">(</span><span class="n">j</span><span class="o">)));</span>
                             <span class="o">}</span>
                         <span class="o">}</span>
                     <span class="o">}</span>
@@ -565,7 +565,7 @@ formed. To do so, we first initialize each vertex with a 
numeric label using the
 the id of a vertex with the first element of the tuple, afterwards applying a 
map function.
 Finally, we call the <code>run()</code> method with the LabelPropagation 
library method passed
 as a parameter. In the end, the vertices will be updated to contain the most 
frequent label
-among their neighbors. </p>
+among their neighbors.</p>
 
 <div class="highlight"><pre><code class="language-java"><span class="c1">// 
detect user communities using label propagation</span>
 <span class="c1">// initialize each vertex with a unique numeric label</span>
@@ -580,12 +580,12 @@ among their neighbors. </p>
 
 <span class="c1">// update the vertex values and run the label propagation 
algorithm</span>
 <span class="n">DataSet</span><span class="o">&lt;</span><span 
class="n">Vertex</span><span class="o">&gt;</span> <span 
class="n">verticesWithCommunity</span> <span class="o">=</span> <span 
class="n">similarUsersGraph</span>
-        <span class="o">.</span><span class="na">joinWithVertices</span><span 
class="o">(</span><span class="n">idsWithlLabels</span><span class="o">,</span> 
<span class="k">new</span> <span class="n">MapFunction</span><span 
class="o">()</span> <span class="o">{</span>
+        <span class="o">.</span><span class="na">joinWithVertices</span><span 
class="o">(</span><span class="n">idsWithlLabels</span><span class="o">,</span> 
<span class="k">new</span> <span class="nf">MapFunction</span><span 
class="o">()</span> <span class="o">{</span>
                 <span class="kd">public</span> <span class="n">Long</span> 
<span class="nf">map</span><span class="o">(</span><span 
class="n">Tuple2</span> <span class="n">idWithLabel</span><span 
class="o">)</span> <span class="o">{</span>
                     <span class="k">return</span> <span 
class="n">idWithLabel</span><span class="o">.</span><span 
class="na">f1</span><span class="o">;</span>
                 <span class="o">}</span>
         <span class="o">})</span>
-        <span class="o">.</span><span class="na">run</span><span 
class="o">(</span><span class="k">new</span> <span 
class="n">LabelPropagation</span><span class="o">(</span><span 
class="n">numIterations</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">run</span><span 
class="o">(</span><span class="k">new</span> <span 
class="nf">LabelPropagation</span><span class="o">(</span><span 
class="n">numIterations</span><span class="o">))</span>
         <span class="o">.</span><span class="na">getVertices</span><span 
class="o">();</span></code></pre></div>
 
 <p><a href="#top">Back to top</a></p>
@@ -595,10 +595,10 @@ among their neighbors. </p>
 <p>Currently, Gelly matches the basic functionalities provided by most 
state-of-the-art graph
 processing systems. Our vision is to turn Gelly into more than “yet another 
library for running
 PageRank-like algorithms” by supporting generic iterations, implementing 
graph partitioning,
-providing bipartite graph support and by offering numerous other features. </p>
+providing bipartite graph support and by offering numerous other features.</p>
 
 <p>We are also enriching Flink Gelly with a set of operators suitable for 
highly skewed graphs
-as well as a Graph API built on Flink Streaming. </p>
+as well as a Graph API built on Flink Streaming.</p>
 
 <p>In the near future, we would like to see how Gelly can be integrated with 
graph visualization
 tools, graph database systems and sampling techniques.</p>
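
The diff above describes Gelly's `reduceOnNeighbors(new SumValues(), EdgeDirection.IN)` aggregation: each vertex sums the values of its in-neighbors, and a vertex with no incoming edges (vertex 1 in the example) gets no result. A minimal plain-Java sketch of those semantics only, not the distributed Gelly API (the class and method names here are illustrative):

```java
import java.util.*;

// Sketch of the semantics of reduceOnNeighbors(new SumValues(), EdgeDirection.IN):
// for each vertex, sum the values of its in-neighbors. Vertices without
// incoming edges produce no result, matching the example in the post.
class NeighborSum {
    // edges: array of {source, target} vertex ids; values: vertexId -> vertex value
    static Map<Long, Long> sumOfInNeighbors(long[][] edges, Map<Long, Long> values) {
        Map<Long, Long> result = new HashMap<>();
        for (long[] e : edges) {
            long src = e[0], trg = e[1];
            // the target vertex accumulates the source's value over the incoming edge
            result.merge(trg, values.get(src), Long::sum);
        }
        return result;
    }
}
```

Because sum is associative and commutative, the real Gelly runtime can apply it pairwise and in any order across partitions, which is exactly why the post picks `reduceOnNeighbors()` over the more general group-reduce variant.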

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2015/09/16/off-heap-memory.html
----------------------------------------------------------------------
diff --git a/content/news/2015/09/16/off-heap-memory.html 
b/content/news/2015/09/16/off-heap-memory.html
index 317e596..40db6ba 100644
--- a/content/news/2015/09/16/off-heap-memory.html
+++ b/content/news/2015/09/16/off-heap-memory.html
@@ -206,7 +206,7 @@
 
 <h2 id="the-off-heap-memory-implementation">The off-heap Memory 
Implementation</h2>
 
-<p>Given that all memory intensive internal algorithms are already implemented 
against the <code>MemorySegment</code>, our implementation to switch to 
off-heap memory is actually trivial. You can compare it to replacing all 
<code>ByteBuffer.allocate(numBytes)</code> calls with 
<code>ByteBuffer.allocateDirect(numBytes)</code>. In Flink’s case it meant 
that we made the <code>MemorySegment</code> abstract and added the 
<code>HeapMemorySegment</code> and <code>OffHeapMemorySegment</code> 
subclasses. The <code>OffHeapMemorySegment</code> takes the off-heap memory 
pointer from a <code>java.nio.DirectByteBuffer</code> and implements its 
specialized access methods using <code>sun.misc.Unsafe</code>. We also made a 
few adjustments to the startup scripts and the deployment code to make sure 
that the JVM is permitted enough off-heap memory (direct memory, 
<em>-XX:MaxDirectMemorySize</em>). </p>
+<p>Given that all memory intensive internal algorithms are already implemented 
against the <code>MemorySegment</code>, our implementation to switch to 
off-heap memory is actually trivial. You can compare it to replacing all 
<code>ByteBuffer.allocate(numBytes)</code> calls with 
<code>ByteBuffer.allocateDirect(numBytes)</code>. In Flink’s case it meant 
that we made the <code>MemorySegment</code> abstract and added the 
<code>HeapMemorySegment</code> and <code>OffHeapMemorySegment</code> 
subclasses. The <code>OffHeapMemorySegment</code> takes the off-heap memory 
pointer from a <code>java.nio.DirectByteBuffer</code> and implements its 
specialized access methods using <code>sun.misc.Unsafe</code>. We also made a 
few adjustments to the startup scripts and the deployment code to make sure 
that the JVM is permitted enough off-heap memory (direct memory, 
<em>-XX:MaxDirectMemorySize</em>).</p>
 
 <p>In practice we had to go one step further, to make the implementation 
perform well. While the <code>ByteBuffer</code> is used in I/O code paths to 
compose headers and move bulk memory into place, the MemorySegment is part of 
the innermost loops of many algorithms (sorting, hash tables, …). That means 
that the access methods have to be as fast as possible.</p>
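
The paragraph above describes making `MemorySegment` abstract with heap and off-heap subclasses. A hedged sketch of that class split, using the portable `ByteBuffer` API in place of the `sun.misc.Unsafe` accessors Flink actually uses (class names here are illustrative, not Flink's):

```java
import java.nio.ByteBuffer;

// Sketch of the abstract-segment split described above: one abstract segment
// type, one subclass backed by a heap byte[] and one backed by a direct
// (off-heap) ByteBuffer. Flink's real OffHeapMemorySegment accesses the
// direct buffer's pointer via sun.misc.Unsafe for speed.
abstract class Segment {
    abstract byte get(int index);
    abstract void put(int index, byte value);
}

class HeapSegment extends Segment {
    private final byte[] memory;
    HeapSegment(int size) { memory = new byte[size]; }
    byte get(int i) { return memory[i]; }
    void put(int i, byte v) { memory[i] = v; }
}

class OffHeapSegment extends Segment {
    private final ByteBuffer buffer;   // direct buffer lives outside the JVM heap
    OffHeapSegment(int size) { buffer = ByteBuffer.allocateDirect(size); }
    byte get(int i) { return buffer.get(i); }
    void put(int i, byte v) { buffer.put(i, v); }
}
```

Direct buffers are capped by `-XX:MaxDirectMemorySize`, which is why the post mentions adjusting the startup scripts so the JVM is permitted enough direct memory.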
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2015/11/16/release-0.10.0.html
----------------------------------------------------------------------
diff --git a/content/news/2015/11/16/release-0.10.0.html 
b/content/news/2015/11/16/release-0.10.0.html
index 163f98e..d3627b9 100644
--- a/content/news/2015/11/16/release-0.10.0.html
+++ b/content/news/2015/11/16/release-0.10.0.html
@@ -162,7 +162,7 @@
 
 <p>The Apache Flink community is pleased to announce the availability of the 
0.10.0 release. The community put significant effort into improving and 
extending Apache Flink since the last release, focusing on data stream 
processing and operational features. About 80 contributors provided bug fixes, 
improvements, and new features such that in total more than 400 JIRA issues 
could be resolved.</p>
 
-<p>For Flink 0.10.0, the focus of the community was to graduate the DataStream 
API from beta and to evolve Apache Flink into a production-ready stream data 
processor with a competitive feature set. These efforts resulted in support for 
event-time and out-of-order streams, exactly-once guarantees in the case of 
failures, a very flexible windowing mechanism, sophisticated operator state 
management, and a highly-available cluster operation mode. Flink 0.10.0 also 
brings a new monitoring dashboard with real-time system and job monitoring 
capabilities. Both batch and streaming modes of Flink benefit from the new high 
availability and improved monitoring features. Needless to say that Flink 
0.10.0 includes many more features, improvements, and bug fixes. </p>
+<p>For Flink 0.10.0, the focus of the community was to graduate the DataStream 
API from beta and to evolve Apache Flink into a production-ready stream data 
processor with a competitive feature set. These efforts resulted in support for 
event-time and out-of-order streams, exactly-once guarantees in the case of 
failures, a very flexible windowing mechanism, sophisticated operator state 
management, and a highly-available cluster operation mode. Flink 0.10.0 also 
brings a new monitoring dashboard with real-time system and job monitoring 
capabilities. Both batch and streaming modes of Flink benefit from the new high 
availability and improved monitoring features. Needless to say that Flink 
0.10.0 includes many more features, improvements, and bug fixes.</p>
 
 <p>We encourage everyone to <a href="/downloads.html">download the release</a> 
and <a 
href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/">check out 
the documentation</a>. Feedback through the Flink <a 
href="/community.html#mailing-lists">mailing lists</a> is, as always, very 
welcome!</p>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2015/12/04/Introducing-windows.html
----------------------------------------------------------------------
diff --git a/content/news/2015/12/04/Introducing-windows.html 
b/content/news/2015/12/04/Introducing-windows.html
index 1f9201c..715c0cb 100644
--- a/content/news/2015/12/04/Introducing-windows.html
+++ b/content/news/2015/12/04/Introducing-windows.html
@@ -160,7 +160,7 @@
       <article>
 <p>04 Dec 2015 by Fabian Hueske (<a 
href="https://twitter.com/fhueske">@fhueske</a>)</p>
 
-<p>The data analysis space is witnessing an evolution from batch to stream 
processing for many use cases. Although batch can be handled as a special case 
of stream processing, analyzing never-ending streaming data often requires a 
shift in the mindset and comes with its own terminology (for example, 
“windowing” and “at-least-once”/”exactly-once” processing). This 
shift and the new terminology can be quite confusing for people being new to 
the space of stream processing. Apache Flink is a production-ready stream 
processor with an easy-to-use yet very expressive API to define advanced stream 
analysis programs. Flink’s API features very flexible window definitions on 
data streams which let it stand out among other open source stream processors. 
</p>
+<p>The data analysis space is witnessing an evolution from batch to stream 
processing for many use cases. Although batch can be handled as a special case 
of stream processing, analyzing never-ending streaming data often requires a 
shift in the mindset and comes with its own terminology (for example, 
“windowing” and “at-least-once”/”exactly-once” processing). This 
shift and the new terminology can be quite confusing for people being new to 
the space of stream processing. Apache Flink is a production-ready stream 
processor with an easy-to-use yet very expressive API to define advanced stream 
analysis programs. Flink’s API features very flexible window definitions on 
data streams which let it stand out among other open source stream 
processors.</p>
 
 <p>In this blog post, we discuss the concept of windows for stream processing, 
present Flink’s built-in windows, and explain its support for custom 
windowing semantics.</p>
 
@@ -223,17 +223,17 @@
 
 <p>There is one aspect that we haven’t discussed yet, namely the exact 
meaning of “<em>collects elements for one minute</em>” which boils down to 
the question, “<em>How does the stream processor interpret time?</em>”.</p>
 
-<p>Apache Flink features three different notions of time, namely 
<em>processing time</em>, <em>event time</em>, and <em>ingestion time</em>. </p>
+<p>Apache Flink features three different notions of time, namely 
<em>processing time</em>, <em>event time</em>, and <em>ingestion time</em>.</p>
 
 <ol>
-  <li>In <strong>processing time</strong>, windows are defined with respect to 
the wall clock of the machine that builds and processes a window, i.e., a one 
minute processing time window collects elements for exactly one minute. </li>
-  <li>In <strong>event time</strong>, windows are defined with respect to 
timestamps that are attached to each event record. This is common for many 
types of events, such as log entries, sensor data, etc, where the timestamp 
usually represents the time at which the event occurred. Event time has several 
benefits over processing time. First of all, it decouples the program semantics 
from the actual serving speed of the source and the processing performance of 
system. Hence you can process historic data, which is served at maximum speed, 
and continuously produced data with the same program. It also prevents 
semantically incorrect results in case of backpressure or delays due to failure 
recovery. Second, event time windows compute correct results, even if events 
arrive out-of-order of their timestamp which is common if a data stream gathers 
events from distributed sources. </li>
+  <li>In <strong>processing time</strong>, windows are defined with respect to 
the wall clock of the machine that builds and processes a window, i.e., a one 
minute processing time window collects elements for exactly one minute.</li>
+  <li>In <strong>event time</strong>, windows are defined with respect to 
timestamps that are attached to each event record. This is common for many 
types of events, such as log entries, sensor data, etc, where the timestamp 
usually represents the time at which the event occurred. Event time has several 
benefits over processing time. First of all, it decouples the program semantics 
from the actual serving speed of the source and the processing performance of 
system. Hence you can process historic data, which is served at maximum speed, 
and continuously produced data with the same program. It also prevents 
semantically incorrect results in case of backpressure or delays due to failure 
recovery. Second, event time windows compute correct results, even if events 
arrive out-of-order of their timestamp which is common if a data stream gathers 
events from distributed sources.</li>
   <li><strong>Ingestion time</strong> is a hybrid of processing and event 
time. It assigns wall clock timestamps to records as soon as they arrive in the 
system (at the source) and continues processing with event time semantics based 
on the attached timestamps.</li>
 </ol>
 
 <h2 id="count-windows">Count Windows</h2>
 
-<p>Apache Flink also features count windows. A tumbling count window of 100 
will collect 100 events in a window and evaluate the window when the 100th 
element has been added. </p>
+<p>Apache Flink also features count windows. A tumbling count window of 100 
will collect 100 events in a window and evaluate the window when the 100th 
element has been added.</p>
 
 <p>In Flink’s DataStream API, tumbling and sliding count windows are defined 
as follows:</p>
 
@@ -256,7 +256,7 @@
 
 <h2 id="dissecting-flinks-windowing-mechanics">Dissecting Flink’s windowing 
mechanics</h2>
 
-<p>Flink’s built-in time and count windows cover a wide range of common 
window use cases. However, there are of course applications that require custom 
windowing logic that cannot be addressed by Flink’s built-in windows. In 
order to support also applications that need very specific windowing semantics, 
the DataStream API exposes interfaces for the internals of its windowing 
mechanics. These interfaces give very fine-grained control about the way that 
windows are built and evaluated. </p>
+<p>Flink’s built-in time and count windows cover a wide range of common 
window use cases. However, there are of course applications that require custom 
windowing logic that cannot be addressed by Flink’s built-in windows. In 
order to support also applications that need very specific windowing semantics, 
the DataStream API exposes interfaces for the internals of its windowing 
mechanics. These interfaces give very fine-grained control about the way that 
windows are built and evaluated.</p>
 
 <p>The following figure depicts Flink’s windowing mechanism and introduces 
the components being involved.</p>
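
The count-window paragraph above says a tumbling count window of 100 buffers events and evaluates when the 100th element arrives. A plain-Java sketch of that single-stream, in-memory semantics only, not Flink's `DataStream.countWindow()` API (class and callback names are illustrative):

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of tumbling count-window semantics: buffer elements and evaluate
// (emit) the window whenever the N-th element arrives. Tumbling windows do
// not overlap, so the buffer is cleared after each evaluation.
class TumblingCountWindow<T> {
    private final int size;
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> onEvaluate;

    TumblingCountWindow(int size, Consumer<List<T>> onEvaluate) {
        this.size = size;
        this.onEvaluate = onEvaluate;
    }

    void add(T element) {
        buffer.add(element);
        if (buffer.size() == size) {                 // N-th element triggers evaluation
            onEvaluate.accept(new ArrayList<>(buffer));
            buffer.clear();                          // tumbling: start the next window fresh
        }
    }
}
```

A sliding count window would differ only in the eviction step: instead of clearing the whole buffer, it would drop the oldest elements by the slide size.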
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2015/12/11/storm-compatibility.html
----------------------------------------------------------------------
diff --git a/content/news/2015/12/11/storm-compatibility.html 
b/content/news/2015/12/11/storm-compatibility.html
index 95b4303..c2a139f 100644
--- a/content/news/2015/12/11/storm-compatibility.html
+++ b/content/news/2015/12/11/storm-compatibility.html
@@ -199,13 +199,13 @@ For this, you only need to replace the dependency 
<code>storm-core</code> by <co
 First, the program is assembled the Storm way without any code change to 
Spouts, Bolts, or the topology itself.</p>
 
 <div class="highlight"><pre><code class="language-java"><span class="c1">// 
assemble topology, the Storm way</span>
-<span class="n">TopologyBuilder</span> <span class="n">builder</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="n">TopologyBuilder</span><span class="o">();</span>
-<span class="n">builder</span><span class="o">.</span><span 
class="na">setSpout</span><span class="o">(</span><span 
class="s">&quot;source&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="n">StormFileSpout</span><span 
class="o">(</span><span class="n">inputFilePath</span><span class="o">));</span>
-<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;tokenizer&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="n">StormBoltTokenizer</span><span 
class="o">())</span>
+<span class="n">TopologyBuilder</span> <span class="n">builder</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="nf">TopologyBuilder</span><span class="o">();</span>
+<span class="n">builder</span><span class="o">.</span><span 
class="na">setSpout</span><span class="o">(</span><span 
class="s">&quot;source&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="nf">StormFileSpout</span><span 
class="o">(</span><span class="n">inputFilePath</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;tokenizer&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="nf">StormBoltTokenizer</span><span 
class="o">())</span>
        <span class="o">.</span><span class="na">shuffleGrouping</span><span 
class="o">(</span><span class="s">&quot;source&quot;</span><span 
class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;counter&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="n">StormBoltCounter</span><span 
class="o">())</span>
-       <span class="o">.</span><span class="na">fieldsGrouping</span><span 
class="o">(</span><span class="s">&quot;tokenizer&quot;</span><span 
class="o">,</span> <span class="k">new</span> <span 
class="n">Fields</span><span class="o">(</span><span 
class="s">&quot;word&quot;</span><span class="o">));</span>
-<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;sink&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="n">StormBoltFileSink</span><span 
class="o">(</span><span class="n">outputFilePath</span><span class="o">))</span>
+<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;counter&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="nf">StormBoltCounter</span><span 
class="o">())</span>
+       <span class="o">.</span><span class="na">fieldsGrouping</span><span 
class="o">(</span><span class="s">&quot;tokenizer&quot;</span><span 
class="o">,</span> <span class="k">new</span> <span 
class="nf">Fields</span><span class="o">(</span><span 
class="s">&quot;word&quot;</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span 
class="na">setBolt</span><span class="o">(</span><span 
class="s">&quot;sink&quot;</span><span class="o">,</span> <span 
class="k">new</span> <span class="nf">StormBoltFileSink</span><span 
class="o">(</span><span class="n">outputFilePath</span><span class="o">))</span>
        <span class="o">.</span><span class="na">shuffleGrouping</span><span 
class="o">(</span><span class="s">&quot;counter&quot;</span><span 
class="o">);</span></code></pre></div>
 
 <p>In order to execute the topology, we need to translate it to a 
<code>FlinkTopology</code> and submit it to a local or remote Flink cluster, 
very similar to submitting the application to a Storm cluster.<sup><a 
href="#fn1" id="ref1">1</a></sup></p>
@@ -214,7 +214,7 @@ First, the program is assembled the Storm way without any 
code change to Spouts,
 <span class="c1">// replaces: StormTopology topology = 
builder.createTopology();</span>
 <span class="n">FlinkTopology</span> <span class="n">topology</span> <span 
class="o">=</span> <span class="n">FlinkTopology</span><span 
class="o">.</span><span class="na">createTopology</span><span 
class="o">(</span><span class="n">builder</span><span class="o">);</span>
 
-<span class="n">Config</span> <span class="n">conf</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="n">Config</span><span class="o">();</span>
+<span class="n">Config</span> <span class="n">conf</span> <span 
class="o">=</span> <span class="k">new</span> <span 
class="nf">Config</span><span class="o">();</span>
 <span class="k">if</span><span class="o">(</span><span 
class="n">runLocal</span><span class="o">)</span> <span class="o">{</span>
        <span class="c1">// use FlinkLocalCluster instead of LocalCluster</span>
        <span class="n">FlinkLocalCluster</span> <span class="n">cluster</span> 
<span class="o">=</span> <span class="n">FlinkLocalCluster</span><span 
class="o">.</span><span class="na">getLocalCluster</span><span 
class="o">();</span>
@@ -254,14 +254,14 @@ As Storm is type agnostic, it is required to specify the 
output type of embedded
 <span class="c1">// use Spout as source</span>
 <span class="n">DataStream</span><span class="o">&lt;</span><span 
class="n">Tuple1</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">&gt;&gt;</span> <span 
class="n">source</span> <span class="o">=</span> 
   <span class="n">env</span><span class="o">.</span><span 
class="na">addSource</span><span class="o">(</span><span class="c1">// Flink 
provided wrapper including original Spout</span>
-                <span class="k">new</span> <span 
class="n">SpoutWrapper</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">&gt;(</span><span class="k">new</span> 
<span class="n">FileSpout</span><span class="o">(</span><span 
class="n">localFilePath</span><span class="o">)),</span> 
+                <span class="k">new</span> <span 
class="n">SpoutWrapper</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">&gt;(</span><span class="k">new</span> 
<span class="nf">FileSpout</span><span class="o">(</span><span 
class="n">localFilePath</span><span class="o">)),</span> 
                 <span class="c1">// specify output type manually</span>
                 <span class="n">TypeExtractor</span><span 
class="o">.</span><span class="na">getForObject</span><span 
class="o">(</span><span class="k">new</span> <span class="n">Tuple1</span><span 
class="o">&lt;</span><span class="n">String</span><span 
class="o">&gt;(</span><span class="s">&quot;&quot;</span><span 
class="o">)));</span>
 <span class="c1">// FileSpout cannot be parallelized</span>
 <span class="n">DataStream</span><span class="o">&lt;</span><span 
class="n">Tuple1</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">&gt;&gt;</span> <span 
class="n">text</span> <span class="o">=</span> <span 
class="n">source</span><span class="o">.</span><span 
class="na">setParallelism</span><span class="o">(</span><span 
class="mi">1</span><span class="o">);</span>
 
 <span class="c1">// further processing with Flink</span>
-<span class="n">DataStream</span><span class="o">&lt;</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;</span> <span 
class="n">tokens</span> <span class="o">=</span> <span 
class="n">text</span><span class="o">.</span><span 
class="na">flatMap</span><span class="o">(</span><span class="k">new</span> 
<span class="n">Tokenizer</span><span class="o">()).</span><span 
class="na">keyBy</span><span class="o">(</span><span class="mi">0</span><span 
class="o">);</span>
+<span class="n">DataStream</span><span class="o">&lt;</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;</span> <span 
class="n">tokens</span> <span class="o">=</span> <span 
class="n">text</span><span class="o">.</span><span 
class="na">flatMap</span><span class="o">(</span><span class="k">new</span> 
<span class="nf">Tokenizer</span><span class="o">()).</span><span 
class="na">keyBy</span><span class="o">(</span><span class="mi">0</span><span 
class="o">);</span>
 
 <span class="c1">// use Bolt for counting</span>
 <span class="n">DataStream</span><span class="o">&lt;</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;</span> <span 
class="n">counts</span> <span class="o">=</span>
@@ -269,7 +269,7 @@ As Storm is type agnostic, it is required to specify the 
output type of embedded
                    <span class="c1">// specify output type manually</span>
                    <span class="n">TypeExtractor</span><span 
class="o">.</span><span class="na">getForObject</span><span 
class="o">(</span><span class="k">new</span> <span class="n">Tuple2</span><span 
class="o">&lt;</span><span class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;(</span><span 
class="s">&quot;&quot;</span><span class="o">,</span><span 
class="mi">0</span><span class="o">))</span>
                    <span class="c1">// Flink provided wrapper including 
original Bolt</span>
-                   <span class="k">new</span> <span 
class="n">BoltWrapper</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;&gt;(</span><span 
class="k">new</span> <span class="n">BoltCounter</span><span 
class="o">()));</span>
+                   <span class="k">new</span> <span 
class="n">BoltWrapper</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Tuple2</span><span class="o">&lt;</span><span 
class="n">String</span><span class="o">,</span><span 
class="n">Integer</span><span class="o">&gt;&gt;(</span><span 
class="k">new</span> <span class="nf">BoltCounter</span><span 
class="o">()));</span>
 
 <span class="c1">// write result to file via Flink sink</span>
 <span class="n">counts</span><span class="o">.</span><span 
class="na">writeAsText</span><span class="o">(</span><span 
class="n">outputPath</span><span class="o">);</span>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2016/03/08/release-1.0.0.html
----------------------------------------------------------------------
diff --git a/content/news/2016/03/08/release-1.0.0.html 
b/content/news/2016/03/08/release-1.0.0.html
index 8e428d8..6393e6e 100644
--- a/content/news/2016/03/08/release-1.0.0.html
+++ b/content/news/2016/03/08/release-1.0.0.html
@@ -160,7 +160,7 @@
       <article>
         <p>08 Mar 2016</p>
 
-<p>The Apache Flink community is pleased to announce the availability of the 
1.0.0 release. The community put significant effort into improving and 
extending Apache Flink since the last release, focusing on improving the 
experience of writing and executing data stream processing pipelines in 
production. </p>
+<p>The Apache Flink community is pleased to announce the availability of the 
1.0.0 release. The community put significant effort into improving and 
extending Apache Flink since the last release, focusing on improving the 
experience of writing and executing data stream processing pipelines in 
production.</p>
 
 <center>
 <img src="/img/blog/flink-1.0.png" style="height:200px;margin:15px" />
@@ -201,7 +201,7 @@ When using this backend, active state in streaming programs 
can grow well beyond
 
 <p>The checkpointing has been extended by a more fine-grained control 
mechanism: In previous versions, new checkpoints were triggered independent of 
the speed at which old checkpoints completed. This can lead to situations where 
new checkpoints are piling up, because they are triggered too frequently.</p>
 
-<p>The checkpoint coordinator now exposes statistics through our REST 
monitoring API and the web interface. Users can review the checkpoint size and 
duration on a per-operator basis and see the last completed checkpoints. This 
is helpful for identifying performance issues, such as a processing slowdown 
caused by checkpointing. </p>
+<p>The checkpoint coordinator now exposes statistics through our REST 
monitoring API and the web interface. Users can review the checkpoint size and 
duration on a per-operator basis and see the last completed checkpoints. This 
is helpful for identifying performance issues, such as a processing slowdown 
caused by checkpointing.</p>
 
 <h2 id="improved-kafka-connector-and-support-for-kafka-09">Improved Kafka 
connector and support for Kafka 0.9</h2>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2016/04/06/cep-monitoring.html
----------------------------------------------------------------------
diff --git a/content/news/2016/04/06/cep-monitoring.html 
b/content/news/2016/04/06/cep-monitoring.html
index f545a47..1be1758 100644
--- a/content/news/2016/04/06/cep-monitoring.html
+++ b/content/news/2016/04/06/cep-monitoring.html
@@ -280,7 +280,7 @@ Our pattern select function generates for each matching 
pattern a <code>Temperat
         <span class="n">TemperatureEvent</span> <span class="n">first</span> 
<span class="o">=</span> <span class="o">(</span><span 
class="n">TemperatureEvent</span><span class="o">)</span> <span 
class="n">pattern</span><span class="o">.</span><span 
class="na">get</span><span class="o">(</span><span class="s">&quot;First 
Event&quot;</span><span class="o">);</span>
         <span class="n">TemperatureEvent</span> <span class="n">second</span> 
<span class="o">=</span> <span class="o">(</span><span 
class="n">TemperatureEvent</span><span class="o">)</span> <span 
class="n">pattern</span><span class="o">.</span><span 
class="na">get</span><span class="o">(</span><span class="s">&quot;Second 
Event&quot;</span><span class="o">);</span>
 
-        <span class="k">return</span> <span class="k">new</span> <span 
class="n">TemperatureWarning</span><span class="o">(</span>
+        <span class="k">return</span> <span class="k">new</span> <span 
class="nf">TemperatureWarning</span><span class="o">(</span>
             <span class="n">first</span><span class="o">.</span><span 
class="na">getRackID</span><span class="o">(),</span> 
             <span class="o">(</span><span class="n">first</span><span 
class="o">.</span><span class="na">getTemperature</span><span 
class="o">()</span> <span class="o">+</span> <span class="n">second</span><span 
class="o">.</span><span class="na">getTemperature</span><span 
class="o">())</span> <span class="o">/</span> <span class="mi">2</span><span 
class="o">);</span>
     <span class="o">}</span>
@@ -322,7 +322,7 @@ Thus, we will only generate a <code>TemperatureAlert</code> 
if and only if the t
         <span class="n">TemperatureWarning</span> <span 
class="n">second</span> <span class="o">=</span> <span 
class="n">pattern</span><span class="o">.</span><span 
class="na">get</span><span class="o">(</span><span class="s">&quot;Second 
Event&quot;</span><span class="o">);</span>
 
         <span class="k">if</span> <span class="o">(</span><span 
class="n">first</span><span class="o">.</span><span 
class="na">getAverageTemperature</span><span class="o">()</span> <span 
class="o">&lt;</span> <span class="n">second</span><span 
class="o">.</span><span class="na">getAverageTemperature</span><span 
class="o">())</span> <span class="o">{</span>
-            <span class="n">out</span><span class="o">.</span><span 
class="na">collect</span><span class="o">(</span><span class="k">new</span> 
<span class="n">TemperatureAlert</span><span class="o">(</span><span 
class="n">first</span><span class="o">.</span><span 
class="na">getRackID</span><span class="o">()));</span>
+            <span class="n">out</span><span class="o">.</span><span 
class="na">collect</span><span class="o">(</span><span class="k">new</span> 
<span class="nf">TemperatureAlert</span><span class="o">(</span><span 
class="n">first</span><span class="o">.</span><span 
class="na">getRackID</span><span class="o">()));</span>
         <span class="o">}</span>
     <span class="o">});</span></code></pre></div>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/news/2016/04/14/flink-forward-announce.html
----------------------------------------------------------------------
diff --git a/content/news/2016/04/14/flink-forward-announce.html 
b/content/news/2016/04/14/flink-forward-announce.html
new file mode 100644
index 0000000..2a1d7b9
--- /dev/null
+++ b/content/news/2016/04/14/flink-forward-announce.html
@@ -0,0 +1,213 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <!-- The above 3 meta tags *must* come first in the head; any other head 
content must come *after* these tags -->
+    <title>Apache Flink: Flink Forward 2016 Call for Submissions Is Now 
Open</title>
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <!-- Bootstrap -->
+    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
+    <link rel="stylesheet" href="/css/flink.css">
+    <link rel="stylesheet" href="/css/syntax.css">
+
+    <!-- Blog RSS feed -->
+    <link href="/blog/feed.xml" rel="alternate" type="application/rss+xml" 
title="Apache Flink Blog: RSS feed" />
+
+    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+    <!-- We need to load Jquery in the header for custom google analytics 
event tracking-->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
+
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media 
queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+  <body>  
+    
+
+  <!-- Top navbar. -->
+    <nav class="navbar navbar-default navbar-fixed-top">
+      <div class="container">
+        <!-- The logo. -->
+        <div class="navbar-header">
+          <button type="button" class="navbar-toggle collapsed" 
data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <div class="navbar-logo">
+            <a href="/">
+              <img alt="Apache Flink" src="/img/navbar-brand-logo.jpg" 
width="78px" height="40px">
+            </a>
+          </div>
+        </div><!-- /.navbar-header -->
+
+        <!-- The navigation links. -->
+        <div class="collapse navbar-collapse" 
id="bs-example-navbar-collapse-1">
+          <ul class="nav navbar-nav">
+
+            <!-- Overview -->
+            <li><a href="/index.html">Overview</a></li>
+
+            <!-- Features -->
+            <li><a href="/features.html">Features</a></li>
+
+            <!-- Downloads -->
+            <li><a href="/downloads.html">Downloads</a></li>
+
+            <!-- FAQ -->
+            <li><a href="/faq.html">FAQ</a></li>
+
+
+            <!-- Quickstart -->
+            <li class="dropdown">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" 
role="button" aria-expanded="false"><small><span class="glyphicon 
glyphicon-new-window"></span></small> Quickstart <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html">Setup</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/run_example_quickstart.html">Example: Wikipedia Edit Stream</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/java_api_quickstart.html">Java API</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/scala_api_quickstart.html">Scala API</a></li>
+              </ul>
+            </li>
+
+            <!-- Documentation -->
+            <li class="dropdown">
+              <a href="" class="dropdown-toggle" data-toggle="dropdown" 
role="button" aria-expanded="false"><small><span class="glyphicon 
glyphicon-new-window"></span></small> Documentation <span 
class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Latest stable release -->
+                <li role="presentation" class="dropdown-header"><strong>Latest 
Release</strong> (Stable)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0">1.0 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/api/java" class="active">1.0 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.0/api/scala/index.html" class="active">1.0 ScalaDocs</a></li>
+
+                <!-- Snapshot docs -->
+                <li class="divider"></li>
+                <li role="presentation" 
class="dropdown-header"><strong>Snapshot</strong> (Development)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master">1.1 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/java" class="active">1.1 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/scala/index.html" class="active">1.1 ScalaDocs</a></li>
+
+                <!-- Wiki -->
+                <li class="divider"></li>
+                <li><a href="/visualizer/"><small><span class="glyphicon 
glyphicon-new-window"></span></small> Plan Visualizer</a></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home"><small><span class="glyphicon glyphicon-new-window"></span></small> Wiki</a></li>
+              </ul>
+            </li>
+
+          </ul>
+
+          <ul class="nav navbar-nav navbar-right">
+            <!-- Blog -->
+            <li class=" active hidden-md hidden-sm"><a 
href="/blog/">Blog</a></li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" 
role="button" aria-expanded="false">Community <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Community -->
+                <li role="presentation" 
class="dropdown-header"><strong>Community</strong></li>
+                <li><a href="/community.html#mailing-lists">Mailing 
Lists</a></li>
+                <li><a href="/community.html#irc">IRC</a></li>
+                <li><a href="/community.html#stack-overflow">Stack 
Overflow</a></li>
+                <li><a href="/community.html#issue-tracker">Issue 
Tracker</a></li>
+                <li><a href="/community.html#third-party-packages">Third Party 
Packages</a></li>
+                <li><a href="/community.html#source-code">Source Code</a></li>
+                <li><a href="/community.html#people">People</a></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Powered+by+Flink"><small><span class="glyphicon glyphicon-new-window"></span></small> Powered by Flink</a></li>
+
+                <!-- Contribute -->
+                <li class="divider"></li>
+                <li role="presentation" 
class="dropdown-header"><strong>Contribute</strong></li>
+                <li><a href="/how-to-contribute.html">How to 
Contribute</a></li>
+                <li><a href="/contribute-code.html">Contribute Code</a></li>
+                <li><a href="/contribute-documentation.html">Contribute 
Documentation</a></li>
+                <li><a href="/improve-website.html">Improve the 
Website</a></li>
+              </ul>
+            </li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" 
role="button" aria-expanded="false">Project <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Project -->
+                <li role="presentation" 
class="dropdown-header"><strong>Project</strong></li>
+                <li><a href="/slides.html">Slides</a></li>
+                <li><a href="/material.html">Material</a></li>
+                <li><a href="https://twitter.com/apacheflink"><small><span class="glyphicon glyphicon-new-window"></span></small> Twitter</a></li>
+                <li><a href="https://github.com/apache/flink"><small><span class="glyphicon glyphicon-new-window"></span></small> GitHub</a></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home"><small><span class="glyphicon glyphicon-new-window"></span></small> Wiki</a></li>
+              </ul>
+            </li>
+          </ul>
+        </div><!-- /.navbar-collapse -->
+      </div><!-- /.container -->
+    </nav>
+
+
+    <!-- Main content. -->
+    <div class="container">
+      
+
+<div class="row">
+  <div class="col-sm-8 col-sm-offset-2">
+    <div class="row">
+      <h1>Flink Forward 2016 Call for Submissions Is Now Open</h1>
+
+      <article>
+        <p>14 Apr 2016 by Aljoscha Krettek (<a 
href="https://twitter.com/aljoscha";>@aljoscha</a>)</p>
+
+<p>We are happy to announce that the call for submissions for Flink Forward 
2016 is now open! The conference will take place September 12-14, 2016 in 
Berlin, Germany, bringing together the open source stream processing community. 
Most Apache Flink committers will attend the conference, making it the ideal 
venue to learn more about the project and its roadmap and connect with the 
community.</p>
+
+<p>The conference welcomes submissions on everything Flink-related, including 
experiences with using Flink, products based on Flink, technical talks on 
extending Flink, as well as connecting Flink with other open source or 
proprietary software.</p>
+
+<p>Read more <a href="http://flink-forward.org/";>here</a>.</p>
+
+      </article>
+    </div>
+
+    <div class="row">
+      <div id="disqus_thread"></div>
+      <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE 
* * */
+        var disqus_shortname = 'stratosphere-eu'; // required: replace example 
with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 
'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+             (document.getElementsByTagName('head')[0] || 
document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+      </script>
+    </div>
+  </div>
+</div>
+
+      <hr />
+      <div class="footer text-center">
+        <p>Copyright © 2014-2015 <a href="http://apache.org";>The Apache 
Software Foundation</a>. All Rights Reserved.</p>
+        <p>Apache Flink, Apache, and the Apache feather logo are trademarks of 
The Apache Software Foundation.</p>
+        <p><a href="/privacy-policy.html">Privacy Policy</a> &middot; <a 
href="/blog/feed.xml">RSS feed</a></p>
+      </div>
+
+    </div><!-- /.container -->
+
+    <!-- Include all compiled plugins (below), or include individual files as 
needed -->
+    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
+    <script src="/js/codetabs.js"></script>
+
+    <!-- Google Analytics -->
+    <script>
+      
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new 
Date();a=s.createElement(o),
+      
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+      
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+      ga('create', 'UA-52545728-1', 'auto');
+      ga('send', 'pageview');
+    </script>
+  </body>
+</html>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/48be7c6f/content/slides.html
----------------------------------------------------------------------
diff --git a/content/slides.html b/content/slides.html
index 20b4bd2..c6797aa 100644
--- a/content/slides.html
+++ b/content/slides.html
@@ -160,12 +160,12 @@
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#training">Training</a></li>
-  <li><a href="#flink-forward">Flink Forward</a></li>
-  <li><a href="#slides">Slides</a>    <ul>
-      <li><a href="#section">2016</a></li>
-      <li><a href="#section-1">2015</a></li>
-      <li><a href="#section-2">2014</a></li>
+  <li><a href="#training" id="markdown-toc-training">Training</a></li>
+  <li><a href="#flink-forward" id="markdown-toc-flink-forward">Flink 
Forward</a></li>
+  <li><a href="#slides" id="markdown-toc-slides">Slides</a>    <ul>
+      <li><a href="#section" id="markdown-toc-section">2016</a></li>
+      <li><a href="#section-1" id="markdown-toc-section-1">2015</a></li>
+      <li><a href="#section-2" id="markdown-toc-section-2">2014</a></li>
     </ul>
   </li>
 </ul>
