Build site for typo fix

Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/e5efd40a
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/e5efd40a
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/e5efd40a

Branch: refs/heads/master
Commit: e5efd40a95c252e5f604024dceae7cb84acc8fc5
Parents: cea9e17
Author: Ufuk Celebi <[email protected]>
Authored: Tue Jul 14 15:40:45 2015 +0200
Committer: Ufuk Celebi <[email protected]>
Committed: Tue Jul 14 15:40:45 2015 +0200

----------------------------------------------------------------------
 content/blog/feed.xml                           | 74 ++++++++++----------
 content/blog/page3/index.html                   |  2 +-
 content/community.html                          | 18 ++---
 content/downloads.html                          |  6 +-
 content/faq.html                                | 54 +++++++-------
 content/features.html                           |  2 +-
 content/how-to-contribute.html                  | 20 +++---
 content/material.html                           | 10 +--
 .../2014/01/13/stratosphere-release-0.4.html    |  6 +-
 .../18/amazon-elastic-mapreduce-cloud-yarn.html |  6 +-
 content/news/2014/11/04/release-0.7.0.html      |  2 +-
 .../news/2014/11/18/hadoop-compatibility.html   |  4 +-
 content/news/2015/01/21/release-0.8.html        |  2 +-
 content/news/2015/02/04/january-in-flink.html   |  2 +-
 content/news/2015/02/09/streaming-example.html  |  4 +-
 .../peeking-into-Apache-Flinks-Engine-Room.html | 18 ++---
 .../05/11/Juggling-with-Bits-and-Bytes.html     | 26 +++----
 .../news/2015/05/14/Community-update-April.html |  4 +-
 18 files changed, 130 insertions(+), 130 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 46caacd..06cc7ea 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -247,7 +247,7 @@
 
 <item>
 <title>April 2015 in the Flink community</title>
-<description>&lt;p&gt;April was an packed month for Apache Flink. &lt;/p&gt;
+<description>&lt;p&gt;April was an packed month for Apache Flink.&lt;/p&gt;
 
 &lt;h2 id=&quot;flink-090-milestone1-release&quot;&gt;Flink 0.9.0-milestone1 
release&lt;/h2&gt;
 
@@ -263,7 +263,7 @@
 
 &lt;h2 id=&quot;flink-on-the-web&quot;&gt;Flink on the web&lt;/h2&gt;
 
-&lt;p&gt;Fabian Hueske gave an &lt;a 
href=&quot;http://www.infoq.com/news/2015/04/hueske-apache-flink?utm_campaign=infoq_content&amp;amp;utm_source=infoq&amp;amp;utm_medium=feed&amp;amp;utm_term=global&quot;&gt;interview
 at InfoQ&lt;/a&gt; on Apache Flink. &lt;/p&gt;
+&lt;p&gt;Fabian Hueske gave an &lt;a 
href=&quot;http://www.infoq.com/news/2015/04/hueske-apache-flink?utm_campaign=infoq_content&amp;amp;utm_source=infoq&amp;amp;utm_medium=feed&amp;amp;utm_term=global&quot;&gt;interview
 at InfoQ&lt;/a&gt; on Apache Flink.&lt;/p&gt;
 
 &lt;h2 id=&quot;upcoming-events&quot;&gt;Upcoming events&lt;/h2&gt;
 
@@ -295,7 +295,7 @@ However, this approach has a few notable drawbacks. First 
of all it is not trivi
 &lt;img src=&quot;/img/blog/memory-mgmt.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
 &lt;/center&gt;
 
-&lt;p&gt;Flink’s style of active memory management and operating on binary 
data has several benefits: &lt;/p&gt;
+&lt;p&gt;Flink’s style of active memory management and operating on binary 
data has several benefits:&lt;/p&gt;
 
 &lt;ol&gt;
   &lt;li&gt;&lt;strong&gt;Memory-safe execution &amp;amp; efficient 
out-of-core algorithms.&lt;/strong&gt; Due to the fixed amount of allocated 
memory segments, it is trivial to monitor remaining memory resources. In case 
of memory shortage, processing operators can efficiently write larger batches 
of memory segments to disk and later them read back. Consequently, 
&lt;code&gt;OutOfMemoryErrors&lt;/code&gt; are effectively prevented.&lt;/li&gt;
@@ -304,13 +304,13 @@ However, this approach has a few notable drawbacks. First 
of all it is not trivi
   &lt;li&gt;&lt;strong&gt;Efficient binary operations &amp;amp; cache 
sensitivity.&lt;/strong&gt; Binary data can be efficiently compared and 
operated on given a suitable binary representation. Furthermore, the binary 
representations can put related values, as well as hash codes, keys, and 
pointers, adjacently into memory. This gives data structures with usually more 
cache efficient access patterns.&lt;/li&gt;
 &lt;/ol&gt;
 
-&lt;p&gt;These properties of active memory management are very desirable in a 
data processing systems for large-scale data analytics but have a significant 
price tag attached. Active memory management and operating on binary data is 
not trivial to implement, i.e., using 
&lt;code&gt;java.util.HashMap&lt;/code&gt; is much easier than implementing a 
spillable hash-table backed by byte arrays and a custom serialization stack. Of 
course Apache Flink is not the only JVM-based data processing system that 
operates on serialized binary data. Projects such as &lt;a 
href=&quot;http://drill.apache.org/&quot;&gt;Apache Drill&lt;/a&gt;, &lt;a 
href=&quot;http://ignite.incubator.apache.org/&quot;&gt;Apache Ignite 
(incubating)&lt;/a&gt; or &lt;a 
href=&quot;http://projectgeode.org/&quot;&gt;Apache Geode 
(incubating)&lt;/a&gt; apply similar techniques and it was recently announced 
that also &lt;a href=&quot;http://spark.apache.org/&quot;&gt;Apache 
Spark&lt;/a&gt; will evolve into this direction with &
 lt;a 
href=&quot;https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html&quot;&gt;Project
 Tungsten&lt;/a&gt;. &lt;/p&gt;
+&lt;p&gt;These properties of active memory management are very desirable in a 
data processing systems for large-scale data analytics but have a significant 
price tag attached. Active memory management and operating on binary data is 
not trivial to implement, i.e., using 
&lt;code&gt;java.util.HashMap&lt;/code&gt; is much easier than implementing a 
spillable hash-table backed by byte arrays and a custom serialization stack. Of 
course Apache Flink is not the only JVM-based data processing system that 
operates on serialized binary data. Projects such as &lt;a 
href=&quot;http://drill.apache.org/&quot;&gt;Apache Drill&lt;/a&gt;, &lt;a 
href=&quot;http://ignite.incubator.apache.org/&quot;&gt;Apache Ignite 
(incubating)&lt;/a&gt; or &lt;a 
href=&quot;http://projectgeode.org/&quot;&gt;Apache Geode 
(incubating)&lt;/a&gt; apply similar techniques and it was recently announced 
that also &lt;a href=&quot;http://spark.apache.org/&quot;&gt;Apache 
Spark&lt;/a&gt; will evolve into this direction with &
 lt;a 
href=&quot;https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html&quot;&gt;Project
 Tungsten&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;In the following we discuss in detail how Flink allocates memory, 
de/serializes objects, and operates on binary data. We will also show some 
performance numbers comparing processing objects on the heap and operating on 
binary data.&lt;/p&gt;
 
 &lt;h2 id=&quot;how-does-flink-allocate-memory&quot;&gt;How does Flink 
allocate memory?&lt;/h2&gt;
 
-&lt;p&gt;A Flink worker, called TaskManager, is composed of several internal 
components such as an actor system for coordination with the Flink master, an 
IOManager that takes care of spilling data to disk and reading it back, and a 
MemoryManager that coordinates memory usage. In the context of this blog post, 
the MemoryManager is of most interest. &lt;/p&gt;
+&lt;p&gt;A Flink worker, called TaskManager, is composed of several internal 
components such as an actor system for coordination with the Flink master, an 
IOManager that takes care of spilling data to disk and reading it back, and a 
MemoryManager that coordinates memory usage. In the context of this blog post, 
the MemoryManager is of most interest.&lt;/p&gt;
 
 &lt;p&gt;The MemoryManager takes care of allocating, accounting, and 
distributing MemorySegments to data processing operators such as sort and join 
operators. A &lt;a 
href=&quot;https://github.com/apache/flink/blob/release-0.9.0-milestone-1/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java&quot;&gt;MemorySegment&lt;/a&gt;
 is Flink’s distribution unit of memory and is backed by a regular Java byte 
array (size is 32 KB by default). A MemorySegment provides very efficient write 
and read access to its backed byte array using Java’s unsafe methods. You can 
think of a MemorySegment as a custom-tailored version of Java’s NIO 
ByteBuffer. In order to operate on multiple MemorySegments like on a larger 
chunk of consecutive memory, Flink uses logical views that implement Java’s 
&lt;code&gt;java.io.DataOutput&lt;/code&gt; and 
&lt;code&gt;java.io.DataInput&lt;/code&gt; interfaces.&lt;/p&gt;
 
@@ -322,7 +322,7 @@ However, this approach has a few notable drawbacks. First 
of all it is not trivi
 
 &lt;h2 id=&quot;how-does-flink-serialize-objects&quot;&gt;How does Flink 
serialize objects?&lt;/h2&gt;
 
-&lt;p&gt;The Java ecosystem offers several libraries to convert objects into a 
binary representation and back. Common alternatives are standard Java 
serialization, &lt;a 
href=&quot;https://github.com/EsotericSoftware/kryo&quot;&gt;Kryo&lt;/a&gt;, 
&lt;a href=&quot;http://avro.apache.org/&quot;&gt;Apache Avro&lt;/a&gt;, &lt;a 
href=&quot;http://thrift.apache.org/&quot;&gt;Apache Thrift&lt;/a&gt;, or 
Google’s &lt;a 
href=&quot;https://github.com/google/protobuf&quot;&gt;Protobuf&lt;/a&gt;. 
Flink includes its own custom serialization framework in order to control the 
binary representation of data. This is important because operating on binary 
data such as comparing or even manipulating binary data requires exact 
knowledge of the serialization layout. Further, configuring the serialization 
layout with respect to operations that are performed on binary data can yield a 
significant performance boost. Flink’s serialization stack also leverages the 
fact, that the type of the objects which 
 are going through de/serialization are exactly known before a program is 
executed. &lt;/p&gt;
+&lt;p&gt;The Java ecosystem offers several libraries to convert objects into a 
binary representation and back. Common alternatives are standard Java 
serialization, &lt;a 
href=&quot;https://github.com/EsotericSoftware/kryo&quot;&gt;Kryo&lt;/a&gt;, 
&lt;a href=&quot;http://avro.apache.org/&quot;&gt;Apache Avro&lt;/a&gt;, &lt;a 
href=&quot;http://thrift.apache.org/&quot;&gt;Apache Thrift&lt;/a&gt;, or 
Google’s &lt;a 
href=&quot;https://github.com/google/protobuf&quot;&gt;Protobuf&lt;/a&gt;. 
Flink includes its own custom serialization framework in order to control the 
binary representation of data. This is important because operating on binary 
data such as comparing or even manipulating binary data requires exact 
knowledge of the serialization layout. Further, configuring the serialization 
layout with respect to operations that are performed on binary data can yield a 
significant performance boost. Flink’s serialization stack also leverages the 
fact, that the type of the objects which 
 are going through de/serialization are exactly known before a program is 
executed.&lt;/p&gt;
 
 &lt;p&gt;Flink programs can process data represented as arbitrary Java or 
Scala objects. Before a program is optimized, the data types at each processing 
step of the program’s data flow need to be identified. For Java programs, 
Flink features a reflection-based type extraction component to analyze the 
return types of user-defined functions. Scala programs are analyzed with help 
of the Scala compiler. Flink represents each data type with a &lt;a 
href=&quot;https://github.com/apache/flink/blob/release-0.9.0-milestone-1/flink-core/src/main/java/org/apache/flink/api/common/typeinfo/TypeInformation.java&quot;&gt;TypeInformation&lt;/a&gt;.
 Flink has TypeInformations for several kinds of data types, 
including:&lt;/p&gt;
 
@@ -332,11 +332,11 @@ However, this approach has a few notable drawbacks. First 
of all it is not trivi
   &lt;li&gt;WritableTypeInfo: Any implementation of Hadoop’s Writable 
interface.&lt;/li&gt;
   &lt;li&gt;TupleTypeInfo: Any Flink tuple (Tuple1 to Tuple25). Flink tuples 
are Java representations for fixed-length tuples with typed fields.&lt;/li&gt;
   &lt;li&gt;CaseClassTypeInfo: Any Scala CaseClass (including Scala 
tuples).&lt;/li&gt;
-  &lt;li&gt;PojoTypeInfo: Any POJO (Java or Scala), i.e., an object with all 
fields either being public or accessible through getters and setter that follow 
the common naming conventions. &lt;/li&gt;
+  &lt;li&gt;PojoTypeInfo: Any POJO (Java or Scala), i.e., an object with all 
fields either being public or accessible through getters and setter that follow 
the common naming conventions.&lt;/li&gt;
   &lt;li&gt;GenericTypeInfo: Any data type that cannot be identified as 
another type.&lt;/li&gt;
 &lt;/ul&gt;
 
-&lt;p&gt;Each TypeInformation provides a serializer for the data type it 
represents. For example, a BasicTypeInfo returns a serializer that writes the 
respective primitive type, the serializer of a WritableTypeInfo delegates 
de/serialization to the write() and readFields() methods of the object 
implementing Hadoop’s Writable interface, and a GenericTypeInfo returns a 
serializer that delegates serialization to Kryo. Object serialization to a 
DataOutput which is backed by Flink MemorySegments goes automatically through 
Java’s efficient unsafe operations. For data types that can be used as keys, 
i.e., compared and hashed, the TypeInformation provides TypeComparators. 
TypeComparators compare and hash objects and can - depending on the concrete 
data type - also efficiently compare binary representations and extract 
fixed-length binary key prefixes. &lt;/p&gt;
+&lt;p&gt;Each TypeInformation provides a serializer for the data type it 
represents. For example, a BasicTypeInfo returns a serializer that writes the 
respective primitive type, the serializer of a WritableTypeInfo delegates 
de/serialization to the write() and readFields() methods of the object 
implementing Hadoop’s Writable interface, and a GenericTypeInfo returns a 
serializer that delegates serialization to Kryo. Object serialization to a 
DataOutput which is backed by Flink MemorySegments goes automatically through 
Java’s efficient unsafe operations. For data types that can be used as keys, 
i.e., compared and hashed, the TypeInformation provides TypeComparators. 
TypeComparators compare and hash objects and can - depending on the concrete 
data type - also efficiently compare binary representations and extract 
fixed-length binary key prefixes.&lt;/p&gt;
 
 &lt;p&gt;Tuple, Pojo, and CaseClass types are composite types, i.e., 
containers for one or more possibly nested data types. As such, their 
serializers and comparators are also composite and delegate the serialization 
and comparison of their member data types to the respective serializers and 
comparators. The following figure illustrates the serialization of a (nested) 
&lt;code&gt;Tuple3&amp;lt;Integer, Double, Person&amp;gt;&lt;/code&gt; object 
where &lt;code&gt;Person&lt;/code&gt; is a POJO and defined as 
follows:&lt;/p&gt;
 
@@ -349,13 +349,13 @@ However, this approach has a few notable drawbacks. First 
of all it is not trivi
 &lt;img src=&quot;/img/blog/data-serialization.png&quot; 
style=&quot;width:80%;margin:15px&quot; /&gt;
 &lt;/center&gt;
 
-&lt;p&gt;Flink’s type system can be easily extended by providing custom 
TypeInformations, Serializers, and Comparators to improve the performance of 
serializing and comparing custom data types. &lt;/p&gt;
+&lt;p&gt;Flink’s type system can be easily extended by providing custom 
TypeInformations, Serializers, and Comparators to improve the performance of 
serializing and comparing custom data types.&lt;/p&gt;
 
 &lt;h2 id=&quot;how-does-flink-operate-on-binary-data&quot;&gt;How does Flink 
operate on binary data?&lt;/h2&gt;
 
 &lt;p&gt;Similar to many other data processing APIs (including SQL), Flink’s 
APIs provide transformations to group, sort, and join data sets. These 
transformations operate on potentially very large data sets. Relational 
database systems feature very efficient algorithms for these purposes since 
several decades including external merge-sort, merge-join, and hybrid 
hash-join. Flink builds on this technology, but generalizes it to handle 
arbitrary objects using its custom serialization and comparison stack. In the 
following, we show how Flink operates with binary data by the example of 
Flink’s in-memory sort algorithm.&lt;/p&gt;
 
-&lt;p&gt;Flink assigns a memory budget to its data processing operators. Upon 
initialization, a sort algorithm requests its memory budget from the 
MemoryManager and receives a corresponding set of MemorySegments. The set of 
MemorySegments becomes the memory pool of a so-called sort buffer which 
collects the data that is be sorted. The following figure illustrates how data 
objects are serialized into the sort buffer. &lt;/p&gt;
+&lt;p&gt;Flink assigns a memory budget to its data processing operators. Upon 
initialization, a sort algorithm requests its memory budget from the 
MemoryManager and receives a corresponding set of MemorySegments. The set of 
MemorySegments becomes the memory pool of a so-called sort buffer which 
collects the data that is be sorted. The following figure illustrates how data 
objects are serialized into the sort buffer.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;img src=&quot;/img/blog/sorting-binary-data-1.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
@@ -368,7 +368,7 @@ The following figure shows how two objects are 
compared.&lt;/p&gt;
 &lt;img src=&quot;/img/blog/sorting-binary-data-2.png&quot; 
style=&quot;width:80%;margin:15px&quot; /&gt;
 &lt;/center&gt;
 
-&lt;p&gt;The sort buffer compares two elements by comparing their binary 
fix-length sort keys. The comparison is successful if either done on a full key 
(not a prefix key) or if the binary prefix keys are not equal. If the prefix 
keys are equal (or the sort key data type does not provide a binary prefix 
key), the sort buffer follows the pointers to the actual object data, 
deserializes both objects and compares the objects. Depending on the result of 
the comparison, the sort algorithm decides whether to swap the compared 
elements or not. The sort buffer swaps two elements by moving their fix-length 
keys and pointers. The actual data is not moved. Once the sort algorithm 
finishes, the pointers in the sort buffer are correctly ordered. The following 
figure shows how the sorted data is returned from the sort buffer. &lt;/p&gt;
+&lt;p&gt;The sort buffer compares two elements by comparing their binary 
fix-length sort keys. The comparison is successful if either done on a full key 
(not a prefix key) or if the binary prefix keys are not equal. If the prefix 
keys are equal (or the sort key data type does not provide a binary prefix 
key), the sort buffer follows the pointers to the actual object data, 
deserializes both objects and compares the objects. Depending on the result of 
the comparison, the sort algorithm decides whether to swap the compared 
elements or not. The sort buffer swaps two elements by moving their fix-length 
keys and pointers. The actual data is not moved. Once the sort algorithm 
finishes, the pointers in the sort buffer are correctly ordered. The following 
figure shows how the sorted data is returned from the sort buffer.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;img src=&quot;/img/blog/sorting-binary-data-3.png&quot; 
style=&quot;width:80%;margin:15px&quot; /&gt;
@@ -386,7 +386,7 @@ The following figure shows how two objects are 
compared.&lt;/p&gt;
   &lt;li&gt;&lt;strong&gt;Kryo-serialized.&lt;/strong&gt; The tuple fields are 
serialized into a sort buffer of 600 MB size using Kryo serialization and 
sorted without binary sort keys. This means that each pair-wise comparison 
requires two object to be deserialized.&lt;/li&gt;
 &lt;/ol&gt;
 
-&lt;p&gt;All sort methods are implemented using a single thread. The reported 
times are averaged over ten runs. After each run, we call 
&lt;code&gt;System.gc()&lt;/code&gt; to request a garbage collection run which 
does not go into measured execution time. The following figure shows the time 
to store the input data in memory, sort it, and read it back as objects. 
&lt;/p&gt;
+&lt;p&gt;All sort methods are implemented using a single thread. The reported 
times are averaged over ten runs. After each run, we call 
&lt;code&gt;System.gc()&lt;/code&gt; to request a garbage collection run which 
does not go into measured execution time. The following figure shows the time 
to store the input data in memory, sort it, and read it back as 
objects.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;img src=&quot;/img/blog/sort-benchmark.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
@@ -444,13 +444,13 @@ The following figure shows how two objects are 
compared.&lt;/p&gt;
 
 &lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;p&gt;To summarize, the experiments verify the previously stated benefits 
of operating on binary data. &lt;/p&gt;
+&lt;p&gt;To summarize, the experiments verify the previously stated benefits 
of operating on binary data.&lt;/p&gt;
 
 &lt;h2 id=&quot;were-not-done-yet&quot;&gt;We’re not done yet!&lt;/h2&gt;
 
-&lt;p&gt;Apache Flink features quite a bit of advanced techniques to safely 
and efficiently process huge amounts of data with limited memory resources. 
However, there are a few points that could make Flink even more efficient. The 
Flink community is working on moving the managed memory to off-heap memory. 
This will allow for smaller JVMs, lower garbage collection overhead, and also 
easier system configuration. With Flink’s Table API, the semantics of all 
operations such as aggregations and projections are known (in contrast to 
black-box user-defined functions). Hence we can generate code for Table API 
operations that directly operates on binary data. Further improvements include 
serialization layouts which are tailored towards the operations that are 
applied on the binary data and code generation for serializers and comparators. 
&lt;/p&gt;
+&lt;p&gt;Apache Flink features quite a bit of advanced techniques to safely 
and efficiently process huge amounts of data with limited memory resources. 
However, there are a few points that could make Flink even more efficient. The 
Flink community is working on moving the managed memory to off-heap memory. 
This will allow for smaller JVMs, lower garbage collection overhead, and also 
easier system configuration. With Flink’s Table API, the semantics of all 
operations such as aggregations and projections are known (in contrast to 
black-box user-defined functions). Hence we can generate code for Table API 
operations that directly operates on binary data. Further improvements include 
serialization layouts which are tailored towards the operations that are 
applied on the binary data and code generation for serializers and 
comparators.&lt;/p&gt;
 
-&lt;p&gt;The groundwork (and a lot more) for operating on binary data is done 
but there is still some room for making Flink even better and faster. If you 
are crazy about performance and like to juggle with lot of bits and bytes, join 
the Flink community! &lt;/p&gt;
+&lt;p&gt;The groundwork (and a lot more) for operating on binary data is done 
but there is still some room for making Flink even better and faster. If you 
are crazy about performance and like to juggle with lot of bits and bytes, join 
the Flink community!&lt;/p&gt;
 
 &lt;h2 id=&quot;tldr-give-me-three-things-to-remember&quot;&gt;TL;DR; Give me 
three things to remember!&lt;/h2&gt;
 
@@ -804,7 +804,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;p&gt;In this blog post, we cut through Apache Flink’s layered 
architecture and take a look at its internals with a focus on how it handles 
joins. Specifically, I will&lt;/p&gt;
 
 &lt;ul&gt;
-  &lt;li&gt;show how easy it is to join data sets using Flink’s fluent APIs, 
&lt;/li&gt;
+  &lt;li&gt;show how easy it is to join data sets using Flink’s fluent 
APIs,&lt;/li&gt;
   &lt;li&gt;discuss basic distributed join strategies, Flink’s join 
implementations, and its memory management,&lt;/li&gt;
   &lt;li&gt;talk about Flink’s optimizer that automatically chooses join 
strategies,&lt;/li&gt;
   &lt;li&gt;show some performance numbers for joining data sets of different 
sizes, and finally&lt;/li&gt;
@@ -815,7 +815,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 
 &lt;h3 id=&quot;how-do-i-join-with-flink&quot;&gt;How do I join with 
Flink?&lt;/h3&gt;
 
-&lt;p&gt;Flink provides fluent APIs in Java and Scala to write data flow 
programs. Flink’s APIs are centered around parallel data collections which 
are called data sets. data sets are processed by applying Transformations that 
compute new data sets. Flink’s transformations include Map and Reduce as 
known from MapReduce &lt;a 
href=&quot;http://research.google.com/archive/mapreduce.html&quot;&gt;[1]&lt;/a&gt;
 but also operators for joining, co-grouping, and iterative processing. The 
documentation gives an overview of all available transformations &lt;a 
href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html&quot;&gt;[2]&lt;/a&gt;.
 &lt;/p&gt;
+&lt;p&gt;Flink provides fluent APIs in Java and Scala to write data flow 
programs. Flink’s APIs are centered around parallel data collections which 
are called data sets. data sets are processed by applying Transformations that 
compute new data sets. Flink’s transformations include Map and Reduce as 
known from MapReduce &lt;a 
href=&quot;http://research.google.com/archive/mapreduce.html&quot;&gt;[1]&lt;/a&gt;
 but also operators for joining, co-grouping, and iterative processing. The 
documentation gives an overview of all available transformations &lt;a 
href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html&quot;&gt;[2]&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;Joining two Scala case class data sets is very easy as the following 
example shows:&lt;/p&gt;
 
@@ -852,7 +852,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 
 &lt;ol&gt;
   &lt;li&gt;The data of both inputs is distributed across all parallel 
instances that participate in the join and&lt;/li&gt;
-  &lt;li&gt;each parallel instance performs a standard stand-alone join 
algorithm on its local partition of the overall data. &lt;/li&gt;
+  &lt;li&gt;each parallel instance performs a standard stand-alone join 
algorithm on its local partition of the overall data.&lt;/li&gt;
 &lt;/ol&gt;
 
 &lt;p&gt;The distribution of data across parallel instances must ensure that 
each valid join pair can be locally built by exactly one instance. For both 
steps, there are multiple valid strategies that can be independently picked and 
which are favorable in different situations. In Flink terminology, the first 
phase is called Ship Strategy and the second phase Local Strategy. In the 
following I will describe Flink’s ship and local strategies to join two data 
sets &lt;em&gt;R&lt;/em&gt; and &lt;em&gt;S&lt;/em&gt;.&lt;/p&gt;
@@ -871,7 +871,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;img src=&quot;/img/blog/joins-repartition.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
 &lt;/center&gt;
 
-&lt;p&gt;The Broadcast-Forward strategy sends one complete data set (R) to 
each parallel instance that holds a partition of the other data set (S), i.e., 
each parallel instance receives the full data set R. Data set S remains local 
and is not shipped at all. The cost of the BF strategy depends on the size of R 
and the number of parallel instances it is shipped to. The size of S does not 
matter because S is not moved. The figure below illustrates how both ship 
strategies work. &lt;/p&gt;
+&lt;p&gt;The Broadcast-Forward strategy sends one complete data set (R) to 
each parallel instance that holds a partition of the other data set (S), i.e., 
each parallel instance receives the full data set R. Data set S remains local 
and is not shipped at all. The cost of the BF strategy depends on the size of R 
and the number of parallel instances it is shipped to. The size of S does not 
matter because S is not moved. The figure below illustrates how both ship 
strategies work.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;img src=&quot;/img/blog/joins-broadcast.png&quot; 
style=&quot;width:90%;margin:15px&quot; /&gt;
@@ -880,7 +880,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;p&gt;The Repartition-Repartition and Broadcast-Forward ship strategies 
establish suitable data distributions to execute a distributed join. Depending 
on the operations that are applied before the join, one or even both inputs of 
a join are already distributed in a suitable way across parallel instances. In 
this case, Flink will reuse such distributions and only ship one or no input at 
all.&lt;/p&gt;
 
 &lt;h4 id=&quot;flinks-memory-management&quot;&gt;Flink’s Memory 
Management&lt;/h4&gt;
-&lt;p&gt;Before delving into the details of Flink’s local join algorithms, I 
will briefly discuss Flink’s internal memory management. Data processing 
algorithms such as joining, grouping, and sorting need to hold portions of 
their input data in memory. While such algorithms perform best if there is 
enough memory available to hold all data, it is crucial to gracefully handle 
situations where the data size exceeds memory. Such situations are especially 
tricky in JVM-based systems such as Flink because the system needs to reliably 
recognize that it is short on memory. Failure to detect such situations can 
result in an &lt;code&gt;OutOfMemoryException&lt;/code&gt; and kill the JVM. 
&lt;/p&gt;
+&lt;p&gt;Before delving into the details of Flink’s local join algorithms, I 
will briefly discuss Flink’s internal memory management. Data processing 
algorithms such as joining, grouping, and sorting need to hold portions of 
their input data in memory. While such algorithms perform best if there is 
enough memory available to hold all data, it is crucial to gracefully handle 
situations where the data size exceeds memory. Such situations are especially 
tricky in JVM-based systems such as Flink because the system needs to reliably 
recognize that it is short on memory. Failure to detect such situations can 
result in an &lt;code&gt;OutOfMemoryException&lt;/code&gt; and kill the 
JVM.&lt;/p&gt;
 
 &lt;p&gt;Flink handles this challenge by actively managing its memory. When a 
worker node (TaskManager) is started, it allocates a fixed portion (70% by 
default) of the JVM’s heap memory that is available after initialization as 
32KB byte arrays. These byte arrays are distributed as working memory to all 
algorithms that need to hold significant portions of data in memory. The 
algorithms receive their input data as Java data objects and serialize them 
into their working memory.&lt;/p&gt;
 
@@ -897,7 +897,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;p&gt;After the data has been distributed across all parallel join 
instances using either a Repartition-Repartition or Broadcast-Forward ship 
strategy, each instance runs a local join algorithm to join the elements of its 
local partition. Flink’s runtime features two common join strategies to 
perform these local joins:&lt;/p&gt;
 
 &lt;ul&gt;
-  &lt;li&gt;the &lt;em&gt;Sort-Merge-Join&lt;/em&gt; strategy (SM) and 
&lt;/li&gt;
+  &lt;li&gt;the &lt;em&gt;Sort-Merge-Join&lt;/em&gt; strategy (SM) 
and&lt;/li&gt;
   &lt;li&gt;the &lt;em&gt;Hybrid-Hash-Join&lt;/em&gt; strategy (HH).&lt;/li&gt;
 &lt;/ul&gt;
 
@@ -942,13 +942,13 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;ul&gt;
   &lt;li&gt;1GB     : 1000GB&lt;/li&gt;
   &lt;li&gt;10GB    : 1000GB&lt;/li&gt;
-  &lt;li&gt;100GB   : 1000GB &lt;/li&gt;
+  &lt;li&gt;100GB   : 1000GB&lt;/li&gt;
   &lt;li&gt;1000GB  : 1000GB&lt;/li&gt;
 &lt;/ul&gt;
 
 &lt;p&gt;The Broadcast-Forward strategy is only executed for up to 10GB. 
Building a hash table from 100GB broadcasted data in 5GB working memory would 
result in spilling proximately 95GB (build input) + 950GB (probe input) in each 
parallel thread and require more than 8TB local disk storage on each 
machine.&lt;/p&gt;
 
-&lt;p&gt;As in the single-core benchmark, we run 1:N joins, generate the data 
on-the-fly, and immediately discard the result after the join. We run the 
benchmark on 10 n1-highmem-8 Google Compute Engine instances. Each instance is 
equipped with 8 cores, 52GB RAM, 40GB of which are configured as working memory 
(5GB per core), and one local SSD for spilling to disk. All benchmarks are 
performed using the same configuration, i.e., no fine tuning for the respective 
data sizes is done. The programs are executed with a parallelism of 80. 
&lt;/p&gt;
+&lt;p&gt;As in the single-core benchmark, we run 1:N joins, generate the data 
on-the-fly, and immediately discard the result after the join. We run the 
benchmark on 10 n1-highmem-8 Google Compute Engine instances. Each instance is 
equipped with 8 cores, 52GB RAM, 40GB of which are configured as working memory 
(5GB per core), and one local SSD for spilling to disk. All benchmarks are 
performed using the same configuration, i.e., no fine tuning for the respective 
data sizes is done. The programs are executed with a parallelism of 
80.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;img src=&quot;/img/blog/joins-dist-perf.png&quot; 
style=&quot;width:70%;margin:15px&quot; /&gt;
@@ -965,7 +965,7 @@ Tez as an execution backend instead of Flink’s own 
network stack. Learn more
 &lt;ul&gt;
   &lt;li&gt;Flink’s fluent Scala and Java APIs make joins and other data 
transformations easy as cake.&lt;/li&gt;
   &lt;li&gt;The optimizer does the hard choices for you, but gives you control 
in case you know better.&lt;/li&gt;
-  &lt;li&gt;Flink’s join implementations perform very good in-memory and 
gracefully degrade when going to disk. &lt;/li&gt;
+  &lt;li&gt;Flink’s join implementations perform very good in-memory and 
gracefully degrade when going to disk.&lt;/li&gt;
   &lt;li&gt;Due to Flink’s robust memory management, there is no need for 
job- or data-specific memory tuning to avoid a nasty 
&lt;code&gt;OutOfMemoryException&lt;/code&gt;. It just runs 
out-of-the-box.&lt;/li&gt;
 &lt;/ul&gt;
 
@@ -1136,7 +1136,7 @@ found &lt;a 
href=&quot;https://github.com/mbalassi/flink/blob/stockprices/flink-
   &lt;li&gt;Read a socket stream of stock prices&lt;/li&gt;
   &lt;li&gt;Parse the text in the stream to create a stream of 
&lt;code&gt;StockPrice&lt;/code&gt; objects&lt;/li&gt;
   &lt;li&gt;Add four other sources tagged with the stock symbol.&lt;/li&gt;
-  &lt;li&gt;Finally, merge the streams to create a unified stream. &lt;/li&gt;
+  &lt;li&gt;Finally, merge the streams to create a unified stream.&lt;/li&gt;
 &lt;/ol&gt;
 
 &lt;p&gt;&lt;img alt=&quot;Reading from multiple inputs&quot; 
src=&quot;/img/blog/blog_multi_input.png&quot; width=&quot;70%&quot; 
class=&quot;img-responsive center-block&quot; /&gt;&lt;/p&gt;
@@ -1608,7 +1608,7 @@ number of mentions of a given stock in the Twitter 
stream. As both of
 these data streams are potentially infinite, we apply the join on a
 30-second window.&lt;/p&gt;
 
-&lt;p&gt;&lt;img alt=&quot;Streaming joins&quot; 
src=&quot;/img/blog/blog_stream_join.png&quot; width=&quot;60%&quot; 
class=&quot;img-responsive center-block&quot; /&gt; &lt;/p&gt;
+&lt;p&gt;&lt;img alt=&quot;Streaming joins&quot; 
src=&quot;/img/blog/blog_stream_join.png&quot; width=&quot;60%&quot; 
class=&quot;img-responsive center-block&quot; /&gt;&lt;/p&gt;
 
 &lt;div class=&quot;codetabs&quot;&gt;
 
@@ -1777,7 +1777,7 @@ internally, fault tolerance, and performance 
measurements!&lt;/p&gt;
 
 &lt;h3 
id=&quot;using-off-heap-memoryhttpsgithubcomapacheflinkpull290&quot;&gt;&lt;a 
href=&quot;https://github.com/apache/flink/pull/290&quot;&gt;Using off-heap 
memory&lt;/a&gt;&lt;/h3&gt;
 
-&lt;p&gt;This pull request enables Flink to use off-heap memory for its 
internal memory uses (sort, hash, caching of intermediate data sets). &lt;/p&gt;
+&lt;p&gt;This pull request enables Flink to use off-heap memory for its 
internal memory uses (sort, hash, caching of intermediate data sets).&lt;/p&gt;
 
 &lt;h3 
id=&quot;gelly-flinks-graph-apihttpsgithubcomapacheflinkpull335&quot;&gt;&lt;a 
href=&quot;https://github.com/apache/flink/pull/335&quot;&gt;Gelly, Flink’s 
Graph API&lt;/a&gt;&lt;/h3&gt;
 
@@ -1849,7 +1849,7 @@ internally, fault tolerance, and performance 
measurements!&lt;/p&gt;
   &lt;li&gt;Stefan Bunk&lt;/li&gt;
   &lt;li&gt;Paris Carbone&lt;/li&gt;
   &lt;li&gt;Ufuk Celebi&lt;/li&gt;
-  &lt;li&gt;Nils Engelbach &lt;/li&gt;
+  &lt;li&gt;Nils Engelbach&lt;/li&gt;
   &lt;li&gt;Stephan Ewen&lt;/li&gt;
   &lt;li&gt;Gyula Fora&lt;/li&gt;
   &lt;li&gt;Gabor Hermann&lt;/li&gt;
@@ -1954,7 +1954,7 @@ Flink serialization system improved a lot over time and 
by now surpasses the cap
 &lt;img src=&quot;/img/blog/hcompat-logos.png&quot; 
style=&quot;width:30%;margin:15px&quot; /&gt;
 &lt;/center&gt;
 
-&lt;p&gt;To close this gap, Flink provides a Hadoop Compatibility package to 
wrap functions implemented against Hadoop’s MapReduce interfaces and embed 
them in Flink programs. This package was developed as part of a &lt;a 
href=&quot;https://developers.google.com/open-source/soc/&quot;&gt;Google 
Summer of Code&lt;/a&gt; 2014 project. &lt;/p&gt;
+&lt;p&gt;To close this gap, Flink provides a Hadoop Compatibility package to 
wrap functions implemented against Hadoop’s MapReduce interfaces and embed 
them in Flink programs. This package was developed as part of a &lt;a 
href=&quot;https://developers.google.com/open-source/soc/&quot;&gt;Google 
Summer of Code&lt;/a&gt; 2014 project.&lt;/p&gt;
 
 &lt;p&gt;With the Hadoop Compatibility package, you can reuse all your 
Hadoop&lt;/p&gt;
 
@@ -1967,7 +1967,7 @@ Flink serialization system improved a lot over time and 
by now surpasses the cap
 
 &lt;p&gt;in Flink programs without changing a line of code. Moreover, Flink 
also natively supports all Hadoop data types 
(&lt;code&gt;Writables&lt;/code&gt; and 
&lt;code&gt;WritableComparable&lt;/code&gt;).&lt;/p&gt;
 
-&lt;p&gt;The following code snippet shows a simple Flink WordCount program 
that solely uses Hadoop data types, InputFormat, OutputFormat, Mapper, and 
Reducer functions. &lt;/p&gt;
+&lt;p&gt;The following code snippet shows a simple Flink WordCount program 
that solely uses Hadoop data types, InputFormat, OutputFormat, Mapper, and 
Reducer functions.&lt;/p&gt;
 
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// 
Definition of Hadoop Mapper function&lt;/span&gt;
 &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span 
class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span 
class=&quot;nc&quot;&gt;Tokenizer&lt;/span&gt; &lt;span 
class=&quot;kd&quot;&gt;implements&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;Mapper&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span 
class=&quot;n&quot;&gt;LongWritable&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;Text&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;Text&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span 
class=&quot;n&quot;&gt;LongWritable&lt;/span&gt;&lt;span 
class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;{&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;...&lt;/span&gt; &lt;span 
class=&quot;o&quot;&gt;}&lt;/span&gt;
@@ -2053,7 +2053,7 @@ Flink serialization system improved a lot over time and 
by now surpasses the cap
 
 &lt;p&gt;&lt;strong&gt;Record API deprecated:&lt;/strong&gt; The (old) 
Stratosphere Record API has been marked as deprecated and is planned for 
removal in the 0.9.0 release.&lt;/p&gt;
 
-&lt;p&gt;&lt;strong&gt;BLOB service:&lt;/strong&gt; This release contains a 
new service to distribute jar files and other binary data among the JobManager, 
TaskManagers and the client. &lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;BLOB service:&lt;/strong&gt; This release contains a 
new service to distribute jar files and other binary data among the JobManager, 
TaskManagers and the client.&lt;/p&gt;
 
 &lt;p&gt;&lt;strong&gt;Intermediate data sets:&lt;/strong&gt; A major rewrite 
of the system internals introduces intermediate data sets as first class 
citizens. The internal state machine that tracks the distributed tasks has also 
been completely rewritten for scalability. While this is not visible as a 
user-facing feature yet, it is the foundation for several upcoming exciting 
features.&lt;/p&gt;
 
@@ -2489,7 +2489,7 @@ Applying students can use our wiki (create a new page) to 
create a project propo
 ssh [email protected] -i 
~/Downloads/work-laptop.pem&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
 &lt;p&gt;(Windows users have to follow &lt;a 
href=&quot;http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-connect-master-node-ssh.html&quot;&gt;these
 instructions&lt;/a&gt; to SSH into the machine running the master.) 
&amp;lt;/br&amp;gt;&amp;lt;/br&amp;gt;
-Once connected to the master, download and start Stratosphere for YARN: 
&lt;/p&gt;
+Once connected to the master, download and start Stratosphere for 
YARN:&lt;/p&gt;
 &lt;ul&gt;
        &lt;li&gt;Download and extract Stratosphere-YARN&lt;/li&gt;
 
@@ -2512,11 +2512,11 @@ The arguments have the following meaning
        &lt;/ul&gt;
 &lt;/ul&gt;
 
-&lt;p&gt;Once the output has changed from &lt;/p&gt;
+&lt;p&gt;Once the output has changed from&lt;/p&gt;
 
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;JobManager is now 
running on N/A:6123&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-&lt;p&gt;to &lt;/p&gt;
+&lt;p&gt;to&lt;/p&gt;
 
 &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code 
class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;JobManager is now 
running on 
ip-172-31-13-68.us-west-2.compute.internal:6123&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
@@ -2738,7 +2738,7 @@ You can now press the “Run” button and see how 
Stratosphere executes the lit
 
 <item>
 <title>Stratosphere 0.4 Released</title>
-<description>&lt;p&gt;We are pleased to announce that version 0.4 of the 
Stratosphere system has been released. &lt;/p&gt;
+<description>&lt;p&gt;We are pleased to announce that version 0.4 of the 
Stratosphere system has been released.&lt;/p&gt;
 
 &lt;p&gt;Our team has been working hard during the last few months to create 
an improved and stable Stratosphere version. The new version comes with many 
new features, usability and performance improvements in all levels, including a 
new Scala API for the concise specification of programs, a Pregel-like API, 
support for Yarn clusters, and major performance improvements. The system 
features now first-class support for iterative programs and thus covers 
traditional analytical use cases as well as data mining and graph processing 
use cases with great performance.&lt;/p&gt;
 
@@ -2766,7 +2766,7 @@ Follow &lt;a 
href=&quot;/docs/0.4/setup/yarn.html&quot;&gt;our guide&lt;/a&gt; o
 &lt;p&gt;The high-level language Meteor now natively serializes JSON trees for 
greater performance and offers additional operators and file formats. We 
greatly empowered the user to write crispier scripts by adding second-order 
functions, multi-output operators, and other syntactical sugar. For developers 
of Meteor packages, the API is much more comprehensive and allows to define 
custom data types that can be easily embedded in JSON trees through ad-hoc byte 
code generation.&lt;/p&gt;
 
 &lt;h3 id=&quot;spargel-pregel-inspired-graph-processing&quot;&gt;Spargel: 
Pregel Inspired Graph Processing&lt;/h3&gt;
-&lt;p&gt;Spargel is a vertex-centric API similar to the interface proposed in 
Google’s Pregel paper and implemented in Apache Giraph. Spargel is 
implemented in 500 lines of code (including comments) on top of 
Stratosphere’s delta iterations feature. This confirms the flexibility of 
Stratosphere’s architecture. &lt;/p&gt;
+&lt;p&gt;Spargel is a vertex-centric API similar to the interface proposed in 
Google’s Pregel paper and implemented in Apache Giraph. Spargel is 
implemented in 500 lines of code (including comments) on top of 
Stratosphere’s delta iterations feature. This confirms the flexibility of 
Stratosphere’s architecture.&lt;/p&gt;
 
 &lt;h3 id=&quot;web-frontend&quot;&gt;Web Frontend&lt;/h3&gt;
 &lt;p&gt;Using the new web frontend, you can monitor the progress of 
Stratosphere jobs. For finished jobs, the frontend shows a breakdown of the 
execution times for each operator. The webclient also visualizes the execution 
strategies chosen by the optimizer.&lt;/p&gt;
@@ -2794,7 +2794,7 @@ Follow &lt;a 
href=&quot;/docs/0.4/setup/yarn.html&quot;&gt;our guide&lt;/a&gt; o
 &lt;/ul&gt;
 
 &lt;h3 
id=&quot;download-and-get-started-with-stratosphere-v04&quot;&gt;Download and 
get started with Stratosphere v0.4&lt;/h3&gt;
-&lt;p&gt;There are several options for getting started with Stratosphere. 
&lt;/p&gt;
+&lt;p&gt;There are several options for getting started with 
Stratosphere.&lt;/p&gt;
 
 &lt;ul&gt;
   &lt;li&gt;Download it on the &lt;a href=&quot;/downloads&quot;&gt;download 
page&lt;/a&gt;&lt;/li&gt;

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/blog/page3/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index 441e662..fbecfe7 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -175,7 +175,7 @@
       <h2 class="blog-title"><a 
href="/news/2014/01/13/stratosphere-release-0.4.html">Stratosphere 0.4 
Released</a></h2>
       <p>13 Jan 2014</p>
 
-      <p><p>We are pleased to announce that version 0.4 of the Stratosphere 
system has been released. </p>
+      <p><p>We are pleased to announce that version 0.4 of the Stratosphere 
system has been released.</p>
 
 </p>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/community.html
----------------------------------------------------------------------
diff --git a/content/community.html b/content/community.html
index ce85100..e32d442 100644
--- a/content/community.html
+++ b/content/community.html
@@ -147,17 +147,17 @@
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#mailing-lists">Mailing Lists</a></li>
-  <li><a href="#irc">IRC</a></li>
-  <li><a href="#stack-overflow">Stack Overflow</a></li>
-  <li><a href="#issue-tracker">Issue Tracker</a></li>
-  <li><a href="#source-code">Source Code</a>    <ul>
-      <li><a href="#main-source-repositories">Main source repositories</a></li>
-      <li><a href="#website-repositories">Website repositories</a></li>
+  <li><a href="#mailing-lists" id="markdown-toc-mailing-lists">Mailing 
Lists</a></li>
+  <li><a href="#irc" id="markdown-toc-irc">IRC</a></li>
+  <li><a href="#stack-overflow" id="markdown-toc-stack-overflow">Stack 
Overflow</a></li>
+  <li><a href="#issue-tracker" id="markdown-toc-issue-tracker">Issue 
Tracker</a></li>
+  <li><a href="#source-code" id="markdown-toc-source-code">Source Code</a>    
<ul>
+      <li><a href="#main-source-repositories" 
id="markdown-toc-main-source-repositories">Main source repositories</a></li>
+      <li><a href="#website-repositories" 
id="markdown-toc-website-repositories">Website repositories</a></li>
     </ul>
   </li>
-  <li><a href="#people">People</a></li>
-  <li><a href="#former-mentors">Former mentors</a></li>
+  <li><a href="#people" id="markdown-toc-people">People</a></li>
+  <li><a href="#former-mentors" id="markdown-toc-former-mentors">Former 
mentors</a></li>
 </ul>
 
 </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/downloads.html
----------------------------------------------------------------------
diff --git a/content/downloads.html b/content/downloads.html
index 26167a6..566da2e 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -156,9 +156,9 @@ $( document ).ready(function() {
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#latest-stable-release-v090">Latest stable release 
(v0.9.0)</a></li>
-  <li><a href="#maven-dependencies">Maven Dependencies</a></li>
-  <li><a href="#all-releases">All releases</a></li>
+  <li><a href="#latest-stable-release-v090" 
id="markdown-toc-latest-stable-release-v090">Latest stable release 
(v0.9.0)</a></li>
+  <li><a href="#maven-dependencies" id="markdown-toc-maven-dependencies">Maven 
Dependencies</a></li>
+  <li><a href="#all-releases" id="markdown-toc-all-releases">All 
releases</a></li>
 </ul>
 
 </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index 85b5f81..b23114b 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -166,40 +166,40 @@ under the License.
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#general">General</a>    <ul>
-      <li><a href="#is-flink-a-hadoop-project">Is Flink a Hadoop 
Project?</a></li>
-      <li><a href="#do-i-have-to-install-apache-hadoop-to-use-flink">Do I have 
to install Apache Hadoop to use Flink?</a></li>
+  <li><a href="#general" id="markdown-toc-general">General</a>    <ul>
+      <li><a href="#is-flink-a-hadoop-project" 
id="markdown-toc-is-flink-a-hadoop-project">Is Flink a Hadoop Project?</a></li>
+      <li><a href="#do-i-have-to-install-apache-hadoop-to-use-flink" 
id="markdown-toc-do-i-have-to-install-apache-hadoop-to-use-flink">Do I have to 
install Apache Hadoop to use Flink?</a></li>
     </ul>
   </li>
-  <li><a href="#usage">Usage</a>    <ul>
-      <li><a href="#how-do-i-assess-the-progress-of-a-flink-program">How do I 
assess the progress of a Flink program?</a></li>
-      <li><a href="#how-can-i-figure-out-why-a-program-failed">How can I 
figure out why a program failed?</a></li>
-      <li><a href="#how-do-i-debug-flink-programs">How do I debug Flink 
programs?</a></li>
-      <li><a href="#what-is-the-parallelism-how-do-i-set-it">What is the 
parallelism? How do I set it?</a></li>
+  <li><a href="#usage" id="markdown-toc-usage">Usage</a>    <ul>
+      <li><a href="#how-do-i-assess-the-progress-of-a-flink-program" 
id="markdown-toc-how-do-i-assess-the-progress-of-a-flink-program">How do I 
assess the progress of a Flink program?</a></li>
+      <li><a href="#how-can-i-figure-out-why-a-program-failed" 
id="markdown-toc-how-can-i-figure-out-why-a-program-failed">How can I figure 
out why a program failed?</a></li>
+      <li><a href="#how-do-i-debug-flink-programs" 
id="markdown-toc-how-do-i-debug-flink-programs">How do I debug Flink 
programs?</a></li>
+      <li><a href="#what-is-the-parallelism-how-do-i-set-it" 
id="markdown-toc-what-is-the-parallelism-how-do-i-set-it">What is the 
parallelism? How do I set it?</a></li>
     </ul>
   </li>
-  <li><a href="#errors">Errors</a>    <ul>
-      <li><a href="#why-am-i-getting-a-nonserializableexception-">Why am I 
getting a “NonSerializableException” ?</a></li>
-      <li><a 
href="#in-scala-api-i-get-an-error-about-implicit-values-and-evidence-parameters">In
 Scala API, I get an error about implicit values and evidence 
parameters</a></li>
-      <li><a 
href="#i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this">I
 get an error message saying that not enough buffers are available. How do I 
fix this?</a></li>
-      <li><a 
href="#my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My 
job fails early with a java.io.EOFException. What could be the cause?</a></li>
-      <li><a 
href="#my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do">My
 job fails with various exceptions from the HDFS/Hadoop code. What can I 
do?</a></li>
-      <li><a 
href="#in-eclipse-i-get-compilation-errors-in-the-scala-projects">In Eclipse, I 
get compilation errors in the Scala projects</a></li>
-      <li><a 
href="#my-program-does-not-compute-the-correct-result-why-are-my-custom-key-types">My
 program does not compute the correct result. Why are my custom key 
types</a></li>
-      <li><a 
href="#i-get-a-javalanginstantiationexception-for-my-data-type-what-is-wrong">I 
get a java.lang.InstantiationException for my data type, what is wrong?</a></li>
-      <li><a 
href="#i-cant-stop-flink-with-the-provided-stop-scripts-what-can-i-do">I 
can’t stop Flink with the provided stop-scripts. What can I do?</a></li>
-      <li><a href="#i-got-an-outofmemoryexception-what-can-i-do">I got an 
OutOfMemoryException. What can I do?</a></li>
-      <li><a href="#why-do-the-taskmanager-log-files-become-so-huge">Why do 
the TaskManager log files become so huge?</a></li>
+  <li><a href="#errors" id="markdown-toc-errors">Errors</a>    <ul>
+      <li><a href="#why-am-i-getting-a-nonserializableexception-" 
id="markdown-toc-why-am-i-getting-a-nonserializableexception-">Why am I getting 
a “NonSerializableException” ?</a></li>
+      <li><a 
href="#in-scala-api-i-get-an-error-about-implicit-values-and-evidence-parameters"
 
id="markdown-toc-in-scala-api-i-get-an-error-about-implicit-values-and-evidence-parameters">In
 Scala API, I get an error about implicit values and evidence 
parameters</a></li>
+      <li><a 
href="#i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this"
 
id="markdown-toc-i-get-an-error-message-saying-that-not-enough-buffers-are-available-how-do-i-fix-this">I
 get an error message saying that not enough buffers are available. How do I 
fix this?</a></li>
+      <li><a 
href="#my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause" 
id="markdown-toc-my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My
 job fails early with a java.io.EOFException. What could be the cause?</a></li>
+      <li><a 
href="#my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do"
 
id="markdown-toc-my-job-fails-with-various-exceptions-from-the-hdfshadoop-code-what-can-i-do">My
 job fails with various exceptions from the HDFS/Hadoop code. What can I 
do?</a></li>
+      <li><a href="#in-eclipse-i-get-compilation-errors-in-the-scala-projects" 
id="markdown-toc-in-eclipse-i-get-compilation-errors-in-the-scala-projects">In 
Eclipse, I get compilation errors in the Scala projects</a></li>
+      <li><a 
href="#my-program-does-not-compute-the-correct-result-why-are-my-custom-key-types"
 
id="markdown-toc-my-program-does-not-compute-the-correct-result-why-are-my-custom-key-types">My
 program does not compute the correct result. Why are my custom key 
types</a></li>
+      <li><a 
href="#i-get-a-javalanginstantiationexception-for-my-data-type-what-is-wrong" 
id="markdown-toc-i-get-a-javalanginstantiationexception-for-my-data-type-what-is-wrong">I
 get a java.lang.InstantiationException for my data type, what is 
wrong?</a></li>
+      <li><a 
href="#i-cant-stop-flink-with-the-provided-stop-scripts-what-can-i-do" 
id="markdown-toc-i-cant-stop-flink-with-the-provided-stop-scripts-what-can-i-do">I
 can’t stop Flink with the provided stop-scripts. What can I do?</a></li>
+      <li><a href="#i-got-an-outofmemoryexception-what-can-i-do" 
id="markdown-toc-i-got-an-outofmemoryexception-what-can-i-do">I got an 
OutOfMemoryException. What can I do?</a></li>
+      <li><a href="#why-do-the-taskmanager-log-files-become-so-huge" 
id="markdown-toc-why-do-the-taskmanager-log-files-become-so-huge">Why do the 
TaskManager log files become so huge?</a></li>
     </ul>
   </li>
-  <li><a href="#yarn-deployment">YARN Deployment</a>    <ul>
-      <li><a href="#the-yarn-session-runs-only-for-a-few-seconds">The YARN 
session runs only for a few seconds</a></li>
-      <li><a 
href="#the-yarn-session-crashes-with-a-hdfs-permission-exception-during-startup">The
 YARN session crashes with a HDFS permission exception during startup</a></li>
+  <li><a href="#yarn-deployment" id="markdown-toc-yarn-deployment">YARN 
Deployment</a>    <ul>
+      <li><a href="#the-yarn-session-runs-only-for-a-few-seconds" 
id="markdown-toc-the-yarn-session-runs-only-for-a-few-seconds">The YARN session 
runs only for a few seconds</a></li>
+      <li><a 
href="#the-yarn-session-crashes-with-a-hdfs-permission-exception-during-startup"
 
id="markdown-toc-the-yarn-session-crashes-with-a-hdfs-permission-exception-during-startup">The
 YARN session crashes with a HDFS permission exception during startup</a></li>
     </ul>
   </li>
-  <li><a href="#features">Features</a>    <ul>
-      <li><a href="#what-kind-of-fault-tolerance-does-flink-provide">What kind 
of fault-tolerance does Flink provide?</a></li>
-      <li><a 
href="#are-hadoop-like-utilities-such-as-counters-and-the-distributedcache-supported">Are
 Hadoop-like utilities, such as Counters and the DistributedCache 
supported?</a></li>
+  <li><a href="#features" id="markdown-toc-features">Features</a>    <ul>
+      <li><a href="#what-kind-of-fault-tolerance-does-flink-provide" 
id="markdown-toc-what-kind-of-fault-tolerance-does-flink-provide">What kind of 
fault-tolerance does Flink provide?</a></li>
+      <li><a 
href="#are-hadoop-like-utilities-such-as-counters-and-the-distributedcache-supported"
 
id="markdown-toc-are-hadoop-like-utilities-such-as-counters-and-the-distributedcache-supported">Are
 Hadoop-like utilities, such as Counters and the DistributedCache 
supported?</a></li>
     </ul>
   </li>
 </ul>
@@ -429,7 +429,7 @@ cluster.sh</code>). You can kill their processes on 
Linux/Mac as follows:</p>
 <ul>
   <li>Determine the process id (pid) of the JobManager / TaskManager process. 
You
can use the <code>jps</code> command on Linux (if you have OpenJDK installed) 
or the command
-<code>ps -ef | grep java</code> to find all Java processes. </li>
+<code>ps -ef | grep java</code> to find all Java processes.</li>
   <li>Kill the process with <code>kill -9 &lt;pid&gt;</code>, where 
<code>pid</code> is the process id of the
 affected JobManager or TaskManager process.</li>
 </ul>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/features.html
----------------------------------------------------------------------
diff --git a/content/features.html b/content/features.html
index 6e03303..297e3d0 100644
--- a/content/features.html
+++ b/content/features.html
@@ -367,7 +367,7 @@
 <div class="row">
   <div class="col-sm-5">
     <p class="lead">The <i>DataStream</i> API supports functional 
transformations on data streams, with user-defined state, and flexible 
windows.</p>
-    <p class="lead">The example shows how to compute a sliding historam of 
word occurrences of a data stream of texts.</p>
+    <p class="lead">The example shows how to compute a sliding histogram of 
word occurrences of a data stream of texts.</p>
   </div>
   <div class="col-sm-7">
     <p class="lead">WindowWordCount in Flink's DataStream API</p>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/how-to-contribute.html
----------------------------------------------------------------------
diff --git a/content/how-to-contribute.html b/content/how-to-contribute.html
index 68208ac..9bfb253 100644
--- a/content/how-to-contribute.html
+++ b/content/how-to-contribute.html
@@ -147,22 +147,22 @@
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#easy-issues-for-starters">Easy Issues for Starters</a></li>
-  <li><a href="#contributing-code--documentation">Contributing Code &amp; 
Documentation</a>    <ul>
-      <li><a 
href="#setting-up-the-infrastructure-and-creating-a-pull-request">Setting up 
the Infrastructure and Creating a Pull Request</a></li>
-      <li><a href="#verifying-the-compliance-of-your-code">Verifying the 
Compliance of your Code</a></li>
+  <li><a href="#easy-issues-for-starters" 
id="markdown-toc-easy-issues-for-starters">Easy Issues for Starters</a></li>
+  <li><a href="#contributing-code--documentation" 
id="markdown-toc-contributing-code--documentation">Contributing Code &amp; 
Documentation</a>    <ul>
+      <li><a href="#setting-up-the-infrastructure-and-creating-a-pull-request" 
id="markdown-toc-setting-up-the-infrastructure-and-creating-a-pull-request">Setting
 up the Infrastructure and Creating a Pull Request</a></li>
+      <li><a href="#verifying-the-compliance-of-your-code" 
id="markdown-toc-verifying-the-compliance-of-your-code">Verifying the 
Compliance of your Code</a></li>
     </ul>
   </li>
-  <li><a href="#contribute-changes-to-the-website">Contribute changes to the 
Website</a>    <ul>
-      <li><a href="#files-and-directories-in-the-website-git-repository">Files 
and Directories in the website git repository</a></li>
-      <li><a href="#the-buildsh-script">The <code>build.sh</code> 
script</a></li>
+  <li><a href="#contribute-changes-to-the-website" 
id="markdown-toc-contribute-changes-to-the-website">Contribute changes to the 
Website</a>    <ul>
+      <li><a href="#files-and-directories-in-the-website-git-repository" 
id="markdown-toc-files-and-directories-in-the-website-git-repository">Files and 
Directories in the website git repository</a></li>
+      <li><a href="#the-buildsh-script" 
id="markdown-toc-the-buildsh-script">The <code>build.sh</code> script</a></li>
     </ul>
   </li>
-  <li><a href="#how-to-become-a-committer">How to become a committer</a>    
<ul>
-      <li><a href="#how-to-use-git-as-a-committer">How to use git as a 
committer</a></li>
+  <li><a href="#how-to-become-a-committer" 
id="markdown-toc-how-to-become-a-committer">How to become a committer</a>    
<ul>
+      <li><a href="#how-to-use-git-as-a-committer" 
id="markdown-toc-how-to-use-git-as-a-committer">How to use git as a 
committer</a></li>
     </ul>
   </li>
-  <li><a href="#snapshots-nightly-builds">Snapshots (Nightly Builds)</a></li>
+  <li><a href="#snapshots-nightly-builds" 
id="markdown-toc-snapshots-nightly-builds">Snapshots (Nightly Builds)</a></li>
 </ul>
 
 </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/material.html
----------------------------------------------------------------------
diff --git a/content/material.html b/content/material.html
index 5f5a616..e79b068 100644
--- a/content/material.html
+++ b/content/material.html
@@ -145,13 +145,13 @@
 
 <div class="page-toc">
 <ul id="markdown-toc">
-  <li><a href="#apache-flink-logos">Apache Flink Logos</a>    <ul>
-      <li><a href="#portable-network-graphics-png">Portable Network Graphics 
(PNG)</a></li>
-      <li><a href="#scalable-vector-graphics-svg">Scalable Vector Graphics 
(SVG)</a></li>
-      <li><a href="#photoshop-psd">Photoshop (PSD)</a></li>
+  <li><a href="#apache-flink-logos" 
id="markdown-toc-apache-flink-logos">Apache Flink Logos</a>    <ul>
+      <li><a href="#portable-network-graphics-png" 
id="markdown-toc-portable-network-graphics-png">Portable Network Graphics 
(PNG)</a></li>
+      <li><a href="#scalable-vector-graphics-svg" 
id="markdown-toc-scalable-vector-graphics-svg">Scalable Vector Graphics 
(SVG)</a></li>
+      <li><a href="#photoshop-psd" id="markdown-toc-photoshop-psd">Photoshop 
(PSD)</a></li>
     </ul>
   </li>
-  <li><a href="#slides">Slides</a></li>
+  <li><a href="#slides" id="markdown-toc-slides">Slides</a></li>
 </ul>
 
 </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2014/01/13/stratosphere-release-0.4.html
----------------------------------------------------------------------
diff --git a/content/news/2014/01/13/stratosphere-release-0.4.html 
b/content/news/2014/01/13/stratosphere-release-0.4.html
index 1be3a53..9845afe 100644
--- a/content/news/2014/01/13/stratosphere-release-0.4.html
+++ b/content/news/2014/01/13/stratosphere-release-0.4.html
@@ -145,7 +145,7 @@
       <article>
         <p>13 Jan 2014</p>
 
-<p>We are pleased to announce that version 0.4 of the Stratosphere system has 
been released. </p>
+<p>We are pleased to announce that version 0.4 of the Stratosphere system has 
been released.</p>
 
 <p>Our team has been working hard during the last few months to create an 
improved and stable Stratosphere version. The new version comes with many new 
features, usability and performance improvements in all levels, including a new 
Scala API for the concise specification of programs, a Pregel-like API, support 
for Yarn clusters, and major performance improvements. The system now features 
first-class support for iterative programs and thus covers traditional 
analytical use cases as well as data mining and graph processing use cases with 
great performance.</p>
 
@@ -173,7 +173,7 @@ Follow <a href="/docs/0.4/setup/yarn.html">our guide</a> on 
how to start a Strat
 <p>The high-level language Meteor now natively serializes JSON trees for 
greater performance and offers additional operators and file formats. We 
greatly empowered the user to write crisper scripts by adding second-order 
functions, multi-output operators, and other syntactic sugar. For developers 
of Meteor packages, the API is much more comprehensive and allows them to define 
custom data types that can be easily embedded in JSON trees through ad-hoc byte 
code generation.</p>
 
 <h3 id="spargel-pregel-inspired-graph-processing">Spargel: Pregel Inspired 
Graph Processing</h3>
-<p>Spargel is a vertex-centric API similar to the interface proposed in 
Google’s Pregel paper and implemented in Apache Giraph. Spargel is 
implemented in 500 lines of code (including comments) on top of 
Stratosphere’s delta iterations feature. This confirms the flexibility of 
Stratosphere’s architecture. </p>
+<p>Spargel is a vertex-centric API similar to the interface proposed in 
Google’s Pregel paper and implemented in Apache Giraph. Spargel is 
implemented in 500 lines of code (including comments) on top of 
Stratosphere’s delta iterations feature. This confirms the flexibility of 
Stratosphere’s architecture.</p>
 
 <h3 id="web-frontend">Web Frontend</h3>
 <p>Using the new web frontend, you can monitor the progress of Stratosphere 
jobs. For finished jobs, the frontend shows a breakdown of the execution times 
for each operator. The webclient also visualizes the execution strategies 
chosen by the optimizer.</p>
@@ -201,7 +201,7 @@ Follow <a href="/docs/0.4/setup/yarn.html">our guide</a> on 
how to start a Strat
 </ul>
 
 <h3 id="download-and-get-started-with-stratosphere-v04">Download and get 
started with Stratosphere v0.4</h3>
-<p>There are several options for getting started with Stratosphere. </p>
+<p>There are several options for getting started with Stratosphere.</p>
 
 <ul>
   <li>Download it on the <a href="/downloads">download page</a></li>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2014/02/18/amazon-elastic-mapreduce-cloud-yarn.html
----------------------------------------------------------------------
diff --git a/content/news/2014/02/18/amazon-elastic-mapreduce-cloud-yarn.html 
b/content/news/2014/02/18/amazon-elastic-mapreduce-cloud-yarn.html
index 664f597..aeec5e7 100644
--- a/content/news/2014/02/18/amazon-elastic-mapreduce-cloud-yarn.html
+++ b/content/news/2014/02/18/amazon-elastic-mapreduce-cloud-yarn.html
@@ -215,7 +215,7 @@
 ssh [email protected] -i 
~/Downloads/work-laptop.pem</code></pre></div>
 
 <p>(Windows users have to follow <a 
href="http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-connect-master-node-ssh.html";>these
 instructions</a> to SSH into the machine running the master.) 
&lt;/br&gt;&lt;/br&gt;
-Once connected to the master, download and start Stratosphere for YARN: </p>
+Once connected to the master, download and start Stratosphere for YARN:</p>
 <ul>
        <li>Download and extract Stratosphere-YARN</li>
 
@@ -238,11 +238,11 @@ The arguments have the following meaning
        </ul>
 </ul>
 
-<p>Once the output has changed from </p>
+<p>Once the output has changed from</p>
 
 <div class="highlight"><pre><code class="language-bash" 
data-lang="bash">JobManager is now running on N/A:6123</code></pre></div>
 
-<p>to </p>
+<p>to</p>
 
 <div class="highlight"><pre><code class="language-bash" 
data-lang="bash">JobManager is now running on 
ip-172-31-13-68.us-west-2.compute.internal:6123</code></pre></div>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2014/11/04/release-0.7.0.html
----------------------------------------------------------------------
diff --git a/content/news/2014/11/04/release-0.7.0.html 
b/content/news/2014/11/04/release-0.7.0.html
index 8756b15..d798825 100644
--- a/content/news/2014/11/04/release-0.7.0.html
+++ b/content/news/2014/11/04/release-0.7.0.html
@@ -165,7 +165,7 @@
 
 <p><strong>Record API deprecated:</strong> The (old) Stratosphere Record API 
has been marked as deprecated and is planned for removal in the 0.9.0 
release.</p>
 
-<p><strong>BLOB service:</strong> This release contains a new service to 
distribute jar files and other binary data among the JobManager, TaskManagers 
and the client. </p>
+<p><strong>BLOB service:</strong> This release contains a new service to 
distribute jar files and other binary data among the JobManager, TaskManagers 
and the client.</p>
 
 <p><strong>Intermediate data sets:</strong> A major rewrite of the system 
internals introduces intermediate data sets as first class citizens. The 
internal state machine that tracks the distributed tasks has also been 
completely rewritten for scalability. While this is not visible as a 
user-facing feature yet, it is the foundation for several upcoming exciting 
features.</p>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2014/11/18/hadoop-compatibility.html
----------------------------------------------------------------------
diff --git a/content/news/2014/11/18/hadoop-compatibility.html 
b/content/news/2014/11/18/hadoop-compatibility.html
index bd5ab3a..0a6418b 100644
--- a/content/news/2014/11/18/hadoop-compatibility.html
+++ b/content/news/2014/11/18/hadoop-compatibility.html
@@ -153,7 +153,7 @@
 <img src="/img/blog/hcompat-logos.png" style="width:30%;margin:15px" />
 </center>
 
-<p>To close this gap, Flink provides a Hadoop Compatibility package to wrap 
functions implemented against Hadoop’s MapReduce interfaces and embed them in 
Flink programs. This package was developed as part of a <a 
href="https://developers.google.com/open-source/soc/";>Google Summer of Code</a> 
2014 project. </p>
+<p>To close this gap, Flink provides a Hadoop Compatibility package to wrap 
functions implemented against Hadoop’s MapReduce interfaces and embed them in 
Flink programs. This package was developed as part of a <a 
href="https://developers.google.com/open-source/soc/";>Google Summer of Code</a> 
2014 project.</p>
 
 <p>With the Hadoop Compatibility package, you can reuse all your Hadoop</p>
 
@@ -166,7 +166,7 @@
 
 <p>in Flink programs without changing a line of code. Moreover, Flink also 
natively supports all Hadoop data types (<code>Writables</code> and 
<code>WritableComparable</code>).</p>
 
-<p>The following code snippet shows a simple Flink WordCount program that 
solely uses Hadoop data types, InputFormat, OutputFormat, Mapper, and Reducer 
functions. </p>
+<p>The following code snippet shows a simple Flink WordCount program that 
solely uses Hadoop data types, InputFormat, OutputFormat, Mapper, and Reducer 
functions.</p>
 
 <div class="highlight"><pre><code class="language-java"><span class="c1">// 
Definition of Hadoop Mapper function</span>
 <span class="kd">public</span> <span class="kd">class</span> <span 
class="nc">Tokenizer</span> <span class="kd">implements</span> <span 
class="n">Mapper</span><span class="o">&lt;</span><span 
class="n">LongWritable</span><span class="o">,</span> <span 
class="n">Text</span><span class="o">,</span> <span class="n">Text</span><span 
class="o">,</span> <span class="n">LongWritable</span><span 
class="o">&gt;</span> <span class="o">{</span> <span class="o">...</span> <span 
class="o">}</span>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2015/01/21/release-0.8.html
----------------------------------------------------------------------
diff --git a/content/news/2015/01/21/release-0.8.html 
b/content/news/2015/01/21/release-0.8.html
index 3a3a678..65f997f 100644
--- a/content/news/2015/01/21/release-0.8.html
+++ b/content/news/2015/01/21/release-0.8.html
@@ -196,7 +196,7 @@
   <li>Stefan Bunk</li>
   <li>Paris Carbone</li>
   <li>Ufuk Celebi</li>
-  <li>Nils Engelbach </li>
+  <li>Nils Engelbach</li>
   <li>Stephan Ewen</li>
   <li>Gyula Fora</li>
   <li>Gabor Hermann</li>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2015/02/04/january-in-flink.html
----------------------------------------------------------------------
diff --git a/content/news/2015/02/04/january-in-flink.html 
b/content/news/2015/02/04/january-in-flink.html
index b953924..65ac3ed 100644
--- a/content/news/2015/02/04/january-in-flink.html
+++ b/content/news/2015/02/04/january-in-flink.html
@@ -177,7 +177,7 @@
 
 <h3 id="using-off-heap-memoryhttpsgithubcomapacheflinkpull290"><a 
href="https://github.com/apache/flink/pull/290";>Using off-heap memory</a></h3>
 
-<p>This pull request enables Flink to use off-heap memory for its internal 
memory uses (sort, hash, caching of intermediate data sets). </p>
+<p>This pull request enables Flink to use off-heap memory for its internal 
memory uses (sort, hash, caching of intermediate data sets).</p>
 
 <h3 id="gelly-flinks-graph-apihttpsgithubcomapacheflinkpull335"><a 
href="https://github.com/apache/flink/pull/335";>Gelly, Flink’s Graph 
API</a></h3>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2015/02/09/streaming-example.html
----------------------------------------------------------------------
diff --git a/content/news/2015/02/09/streaming-example.html 
b/content/news/2015/02/09/streaming-example.html
index 9964f18..75a0d07 100644
--- a/content/news/2015/02/09/streaming-example.html
+++ b/content/news/2015/02/09/streaming-example.html
@@ -181,7 +181,7 @@ found <a 
href="https://github.com/mbalassi/flink/blob/stockprices/flink-staging/
   <li>Read a socket stream of stock prices</li>
   <li>Parse the text in the stream to create a stream of 
<code>StockPrice</code> objects</li>
   <li>Add four other sources tagged with the stock symbol.</li>
-  <li>Finally, merge the streams to create a unified stream. </li>
+  <li>Finally, merge the streams to create a unified stream.</li>
 </ol>
 
 <p><img alt="Reading from multiple inputs" 
src="/img/blog/blog_multi_input.png" width="70%" class="img-responsive 
center-block" /></p>
@@ -653,7 +653,7 @@ number of mentions of a given stock in the Twitter stream. 
As both of
 these data streams are potentially infinite, we apply the join on a
 30-second window.</p>
 
-<p><img alt="Streaming joins" src="/img/blog/blog_stream_join.png" width="60%" 
class="img-responsive center-block" /> </p>
+<p><img alt="Streaming joins" src="/img/blog/blog_stream_join.png" width="60%" 
class="img-responsive center-block" /></p>
 
 <div class="codetabs">
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/e5efd40a/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
----------------------------------------------------------------------
diff --git 
a/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html 
b/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
index b8b0cd1..6eff3b3 100644
--- a/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
+++ b/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
@@ -152,7 +152,7 @@
 <p>In this blog post, we cut through Apache Flink’s layered architecture and 
take a look at its internals with a focus on how it handles joins. 
Specifically, I will</p>
 
 <ul>
-  <li>show how easy it is to join data sets using Flink’s fluent APIs, </li>
+  <li>show how easy it is to join data sets using Flink’s fluent APIs,</li>
   <li>discuss basic distributed join strategies, Flink’s join 
implementations, and its memory management,</li>
   <li>talk about Flink’s optimizer that automatically chooses join 
strategies,</li>
   <li>show some performance numbers for joining data sets of different sizes, 
and finally</li>
@@ -163,7 +163,7 @@
 
 <h3 id="how-do-i-join-with-flink">How do I join with Flink?</h3>
 
-<p>Flink provides fluent APIs in Java and Scala to write data flow programs. 
Flink’s APIs are centered around parallel data collections which are called 
data sets. data sets are processed by applying Transformations that compute new 
data sets. Flink’s transformations include Map and Reduce as known from 
MapReduce <a href="http://research.google.com/archive/mapreduce.html";>[1]</a> 
but also operators for joining, co-grouping, and iterative processing. The 
documentation gives an overview of all available transformations <a 
href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html";>[2]</a>.
 </p>
+<p>Flink provides fluent APIs in Java and Scala to write data flow programs. 
Flink’s APIs are centered around parallel data collections which are called 
data sets. Data sets are processed by applying transformations that compute new 
data sets. Flink’s transformations include Map and Reduce as known from 
MapReduce <a href="http://research.google.com/archive/mapreduce.html";>[1]</a> 
but also operators for joining, co-grouping, and iterative processing. The 
documentation gives an overview of all available transformations <a 
href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html";>[2]</a>.</p>
 
 <p>Joining two Scala case class data sets is very easy as the following 
example shows:</p>
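The Scala snippet referred to here is not included in the diff. As a stand-in, an equivalent equi-join in the Java DataSet API might look like the sketch below, with two toy inputs playing the role of the data sets R and S discussed later in the post.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Two toy inputs standing in for the data sets R and S.
    DataSet<Tuple2<Integer, String>> r = env.fromElements(
        new Tuple2<>(1, "apache"), new Tuple2<>(2, "flink"));
    DataSet<Tuple2<Integer, Double>> s = env.fromElements(
        new Tuple2<>(1, 0.5), new Tuple2<>(2, 0.75));

    // Equi-join on the first field of each tuple; the default result pairs
    // each matching element of R with its partner from S.
    DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, Double>>> joined =
        r.join(s).where(0).equalTo(0);

    joined.print();
  }
}

The default join result simply pairs the matching elements; a JoinFunction can be supplied instead to shape the output.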
 
@@ -200,7 +200,7 @@
 
 <ol>
   <li>The data of both inputs is distributed across all parallel instances 
that participate in the join and</li>
-  <li>each parallel instance performs a standard stand-alone join algorithm on 
its local partition of the overall data. </li>
+  <li>each parallel instance performs a standard stand-alone join algorithm on 
its local partition of the overall data.</li>
 </ol>
 
 <p>The distribution of data across parallel instances must ensure that each 
valid join pair can be locally built by exactly one instance. For both steps, 
there are multiple valid strategies that can be independently picked and which 
are favorable in different situations. In Flink terminology, the first phase is 
called Ship Strategy and the second phase Local Strategy. In the following I 
will describe Flink’s ship and local strategies to join two data sets 
<em>R</em> and <em>S</em>.</p>
@@ -219,7 +219,7 @@
 <img src="/img/blog/joins-repartition.png" style="width:90%;margin:15px" />
 </center>
 
-<p>The Broadcast-Forward strategy sends one complete data set (R) to each 
parallel instance that holds a partition of the other data set (S), i.e., each 
parallel instance receives the full data set R. Data set S remains local and is 
not shipped at all. The cost of the BF strategy depends on the size of R and 
the number of parallel instances it is shipped to. The size of S does not 
matter because S is not moved. The figure below illustrates how both ship 
strategies work. </p>
+<p>The Broadcast-Forward strategy sends one complete data set (R) to each 
parallel instance that holds a partition of the other data set (S), i.e., each 
parallel instance receives the full data set R. Data set S remains local and is 
not shipped at all. The cost of the BF strategy depends on the size of R and 
the number of parallel instances it is shipped to. The size of S does not 
matter because S is not moved. The figure below illustrates how both ship 
strategies work.</p>
 
 <center>
 <img src="/img/blog/joins-broadcast.png" style="width:90%;margin:15px" />
@@ -228,7 +228,7 @@
 <p>The Repartition-Repartition and Broadcast-Forward ship strategies establish 
suitable data distributions to execute a distributed join. Depending on the 
operations that are applied before the join, one or even both inputs of a join 
are already distributed in a suitable way across parallel instances. In this 
case, Flink will reuse such distributions and only ship one or no input at 
all.</p>
 
 <h4 id="flinks-memory-management">Flink’s Memory Management</h4>
-<p>Before delving into the details of Flink’s local join algorithms, I will 
briefly discuss Flink’s internal memory management. Data processing 
algorithms such as joining, grouping, and sorting need to hold portions of 
their input data in memory. While such algorithms perform best if there is 
enough memory available to hold all data, it is crucial to gracefully handle 
situations where the data size exceeds memory. Such situations are especially 
tricky in JVM-based systems such as Flink because the system needs to reliably 
recognize that it is short on memory. Failure to detect such situations can 
result in an <code>OutOfMemoryException</code> and kill the JVM. </p>
+<p>Before delving into the details of Flink’s local join algorithms, I will 
briefly discuss Flink’s internal memory management. Data processing 
algorithms such as joining, grouping, and sorting need to hold portions of 
their input data in memory. While such algorithms perform best if there is 
enough memory available to hold all data, it is crucial to gracefully handle 
situations where the data size exceeds memory. Such situations are especially 
tricky in JVM-based systems such as Flink because the system needs to reliably 
recognize that it is short on memory. Failure to detect such situations can 
result in an <code>OutOfMemoryException</code> and kill the JVM.</p>
 
 <p>Flink handles this challenge by actively managing its memory. When a worker 
node (TaskManager) is started, it allocates a fixed portion (70% by default) of 
the JVM’s heap memory that is available after initialization as 32KB byte 
arrays. These byte arrays are distributed as working memory to all algorithms 
that need to hold significant portions of data in memory. The algorithms 
receive their input data as Java data objects and serialize them into their 
working memory.</p>
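As a loose illustration of this scheme (not Flink's actual MemorySegment or MemoryManager code), a fixed pool of equally sized segments can be sketched as follows; running out of segments becomes an explicit signal to spill to disk instead of an OutOfMemoryError.

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Toy illustration of active memory management: grab a fixed budget up front
// as equally sized segments and hand them out to operators on request.
public class ToySegmentPool {
  private static final int SEGMENT_SIZE = 32 * 1024; // 32KB segments, as in the post
  private final Deque<ByteBuffer> free = new ArrayDeque<>();

  public ToySegmentPool(long poolBytes) {
    // Allocate the whole budget eagerly when the pool is created.
    for (long allocated = 0; allocated + SEGMENT_SIZE <= poolBytes; allocated += SEGMENT_SIZE) {
      free.push(ByteBuffer.allocate(SEGMENT_SIZE));
    }
  }

  // Hands out a segment, or null when the budget is exhausted (the caller must spill).
  public ByteBuffer request() {
    return free.isEmpty() ? null : free.pop();
  }

  // Returns a segment to the pool once an algorithm is done with it.
  public void release(ByteBuffer segment) {
    segment.clear();
    free.push(segment);
  }
}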
 
@@ -245,7 +245,7 @@
 <p>After the data has been distributed across all parallel join instances 
using either a Repartition-Repartition or Broadcast-Forward ship strategy, each 
instance runs a local join algorithm to join the elements of its local 
partition. Flink’s runtime features two common join strategies to perform 
these local joins:</p>
 
 <ul>
-  <li>the <em>Sort-Merge-Join</em> strategy (SM) and </li>
+  <li>the <em>Sort-Merge-Join</em> strategy (SM) and</li>
   <li>the <em>Hybrid-Hash-Join</em> strategy (HH).</li>
 </ul>
 
@@ -290,13 +290,13 @@
 <ul>
   <li>1GB     : 1000GB</li>
   <li>10GB    : 1000GB</li>
-  <li>100GB   : 1000GB </li>
+  <li>100GB   : 1000GB</li>
   <li>1000GB  : 1000GB</li>
 </ul>
 
 <p>The Broadcast-Forward strategy is only executed for up to 10GB. Building a 
hash table from 100GB broadcasted data in 5GB working memory would result in 
spilling approximately 95GB (build input) + 950GB (probe input) in each parallel 
thread and require more than 8TB local disk storage on each machine.</p>
 
-<p>As in the single-core benchmark, we run 1:N joins, generate the data 
on-the-fly, and immediately discard the result after the join. We run the 
benchmark on 10 n1-highmem-8 Google Compute Engine instances. Each instance is 
equipped with 8 cores, 52GB RAM, 40GB of which are configured as working memory 
(5GB per core), and one local SSD for spilling to disk. All benchmarks are 
performed using the same configuration, i.e., no fine tuning for the respective 
data sizes is done. The programs are executed with a parallelism of 80. </p>
+<p>As in the single-core benchmark, we run 1:N joins, generate the data 
on-the-fly, and immediately discard the result after the join. We run the 
benchmark on 10 n1-highmem-8 Google Compute Engine instances. Each instance is 
equipped with 8 cores, 52GB RAM, 40GB of which are configured as working memory 
(5GB per core), and one local SSD for spilling to disk. All benchmarks are 
performed using the same configuration, i.e., no fine tuning for the respective 
data sizes is done. The programs are executed with a parallelism of 80.</p>
 
 <center>
 <img src="/img/blog/joins-dist-perf.png" style="width:70%;margin:15px" />
@@ -313,7 +313,7 @@
 <ul>
   <li>Flink’s fluent Scala and Java APIs make joins and other data 
transformations easy as cake.</li>
   <li>The optimizer does the hard choices for you, but gives you control in 
case you know better.</li>
-  <li>Flink’s join implementations perform very good in-memory and 
gracefully degrade when going to disk. </li>
+  <li>Flink’s join implementations perform very well in-memory and 
gracefully degrade when going to disk.</li>
   <li>Due to Flink’s robust memory management, there is no need for job- or 
data-specific memory tuning to avoid a nasty <code>OutOfMemoryException</code>. 
It just runs out-of-the-box.</li>
 </ul>
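The second bullet corresponds to the DataSet API's join hints, which let a program override the optimizer's choice of ship and local strategy. A small sketch with toy inputs standing in for a large and a small data set:

import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinHintSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Toy inputs; in practice "large" would be much bigger than "small".
    DataSet<Tuple2<Integer, String>> small = env.fromElements(
        new Tuple2<>(1, "a"), new Tuple2<>(2, "b"));
    DataSet<Tuple2<Integer, Long>> large = env.fromElements(
        new Tuple2<>(1, 100L), new Tuple2<>(2, 200L));

    // Let the optimizer pick ship and local strategies (the default)...
    large.join(small).where(0).equalTo(0).print();

    // ...or override it, e.g. broadcast the second (small) input to every instance.
    large.join(small, JoinHint.BROADCAST_HASH_SECOND).where(0).equalTo(0).print();
  }
}

BROADCAST_HASH_SECOND corresponds to the Broadcast-Forward ship strategy described above: the second input is broadcast to every parallel instance and used to build the hash table.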
 
