This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new baebf9f  Publishing website 2021/12/08 13:06:16 at commit 4e3c00d
baebf9f is described below

commit baebf9fa7df134486f4070e766f691d7358df388
Author: jenkins <[email protected]>
AuthorDate: Wed Dec 8 13:06:16 2021 +0000

    Publishing website 2021/12/08 13:06:16 at commit 4e3c00d
---
 .../documentation/basics/index.html                |  70 +++++-----
 .../documentation/glossary/index.html              |   2 +-
 website/generated-content/documentation/index.xml  | 151 ++++++++++++---------
 .../documentation/programming-guide/index.html     |  14 +-
 website/generated-content/sitemap.xml              |   2 +-
 5 files changed, 133 insertions(+), 106 deletions(-)

diff --git a/website/generated-content/documentation/basics/index.html 
b/website/generated-content/documentation/basics/index.html
index d74214d..137bd68 100644
--- a/website/generated-content/documentation/basics/index.html
+++ b/website/generated-content/documentation/basics/index.html
@@ -18,7 +18,7 @@
 function addPlaceholder(){$('input:text').attr('placeholder',"What are you 
looking for?");}
 function endSearch(){var 
search=document.querySelector(".searchBar");search.classList.add("disappear");var
 icons=document.querySelector("#iconsBar");icons.classList.remove("disappear");}
 function blockScroll(){$("body").toggleClass("fixedPosition");}
-function openMenu(){addPlaceholder();blockScroll();}</script><div 
class="clearfix container-main-content"><div class="section-nav closed" 
data-offset-top=90 data-offset-bottom=500><span class="section-nav-back 
glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list 
data-section-nav><li><span 
class=section-nav-list-main-title>Documentation</span></li><li><a 
href=/documentation>Using the Documentation</a></li><li 
class=section-nav-item--collapsible><span class=section-nav-lis [...]
+function openMenu(){addPlaceholder();blockScroll();}</script><div 
class="clearfix container-main-content"><div class="section-nav closed" 
data-offset-top=90 data-offset-bottom=500><span class="section-nav-back 
glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list 
data-section-nav><li><span 
class=section-nav-list-main-title>Documentation</span></li><li><a 
href=/documentation>Using the Documentation</a></li><li 
class=section-nav-item--collapsible><span class=section-nav-lis [...]
 data-parallel processing pipelines. To get started with Beam, you&rsquo;ll 
need to
 understand an important set of core concepts:</p><ul><li><a 
href=#pipeline><em>Pipeline</em></a> - A pipeline is a user-constructed graph of
 transformations that defines the desired data processing 
operations.</li><li><a href=#pcollection><em>PCollection</em></a> - A 
<code>PCollection</code> is a data set or data
@@ -60,25 +60,7 @@ be unbounded streams of data. In Beam, most transforms apply 
equally to bounded
 and unbounded data.</p><p>You can express almost any computation that you can 
think of as a graph as a
 Beam pipeline. A Beam driver program typically starts by creating a 
<code>Pipeline</code>
 object, and then uses that object as the basis for creating the pipeline’s data
-sets and its transforms.</p><p>For more information about pipelines, see the 
following pages:</p><ul><li><a 
href=/documentation/programming-guide/#overview>Beam Programming Guide: 
Overview</a></li><li><a 
href=/documentation/programming-guide/#creating-a-pipeline>Beam Programming 
Guide: Creating a pipeline</a></li><li><a 
href=/documentation/pipelines/design-your-pipeline>Design your 
pipeline</a></li><li><a 
href=/documentation/pipeline/create-your-pipeline>Create your 
pipeline</a></li></ul [...]
-in your pipeline. A transform is usually applied to one or more input
-<code>PCollection</code> objects. Transforms that read input are an exception; 
these
-transforms might not have an input <code>PCollection</code>.</p><p>You provide 
transform processing logic in the form of a function object
-(colloquially referred to as “user code”), and your user code is applied to 
each
-element of the input PCollection (or more than one PCollection). Depending on
-the pipeline runner and backend that you choose, many different workers across 
a
-cluster might execute instances of your user code in parallel. The user code
-that runs on each worker generates the output elements that are added to zero 
or
-more output <code>PCollection</code> objects.</p><p>The Beam SDKs contain a 
number of different transforms that you can apply to
-your pipeline’s PCollections. These include general-purpose core transforms,
-such as <code>ParDo</code> or <code>Combine</code>. There are also pre-written 
composite transforms
-included in the SDKs, which combine one or more of the core transforms in a
-useful processing pattern, such as counting or combining elements in a
-collection. You can also define your own more complex composite transforms to
-fit your pipeline’s exact use case.</p><p>The following list has some common 
transform types:</p><ul><li>Source transforms such as <code>TextIO.Read</code> 
and <code>Create</code>. A source transform
-conceptually has no input.</li><li>Processing and conversion operations such 
as <code>ParDo</code>, <code>GroupByKey</code>,
-<code>CoGroupByKey</code>, <code>Combine</code>, and 
<code>Count</code>.</li><li>Outputting transforms such as 
<code>TextIO.Write</code>.</li><li>User-defined, application-specific composite 
transforms.</li></ul><p>For more information about transforms, see the 
following pages:</p><ul><li><a 
href=/documentation/programming-guide/#overview>Beam Programming Guide: 
Overview</a></li><li><a href=/documentation/programming-guide/#transforms>Beam 
Programming Guide: Transforms</a></li><li>Beam t [...]
-<a href=/documentation/transforms/python/overview/>Python</a>)</li></ul><h3 
id=pcollection>PCollection</h3><p>A <code>PCollection</code> is an unordered 
bag of elements. Each <code>PCollection</code> is a
+sets and its transforms.</p><p>For more information about pipelines, see the 
following pages:</p><ul><li><a 
href=/documentation/programming-guide/#overview>Beam Programming Guide: 
Overview</a></li><li><a 
href=/documentation/programming-guide/#creating-a-pipeline>Beam Programming 
Guide: Creating a pipeline</a></li><li><a 
href=/documentation/pipelines/design-your-pipeline>Design your 
pipeline</a></li><li><a 
href=/documentation/pipeline/create-your-pipeline>Create your 
pipeline</a></li></ul [...]
 potentially distributed, homogeneous data set or data stream, and is owned by
 the specific <code>Pipeline</code> object for which it is created. Multiple 
pipelines
 cannot share a <code>PCollection</code>. Beam pipelines process PCollections, 
and the
@@ -87,7 +69,7 @@ a single machine). Sometimes a small sample of data or an 
intermediate result
 might fit into memory on a single machine, but Beam&rsquo;s computational 
patterns and
 transforms are focused on situations where distributed data-parallel 
computation
 is required. Therefore, the elements of a <code>PCollection</code> cannot be 
processed
-individually, and are instead processed uniformly in parallel.</p><p>The 
following characteristics of a <code>PCollection</code> are important to 
know.</p><h4 id=bounded-vs-unbounded>Bounded vs unbounded</h4><p>A 
<code>PCollection</code> can be either bounded or unbounded.</p><ul><li>A 
<em>bounded</em> <code>PCollection</code> is a dataset of a known, fixed size 
(alternatively,
+individually, and are instead processed uniformly in parallel.</p><p>The 
following characteristics of a <code>PCollection</code> are important to 
know.</p><p><strong>Bounded vs. unbounded</strong>:</p><p>A 
<code>PCollection</code> can be either bounded or unbounded.</p><ul><li>A 
<em>bounded</em> <code>PCollection</code> is a dataset of a known, fixed size 
(alternatively,
 a dataset that is not growing over time). Bounded data can be processed by
 batch pipelines.</li><li>An <em>unbounded</em> <code>PCollection</code> is a 
dataset that grows over time, and the
 elements are processed as they arrive. Unbounded data must be processed by
@@ -96,24 +78,24 @@ but the two are unified in Beam and bounded and unbounded 
PCollections can
 coexist in the same pipeline. If your runner can only support bounded
 PCollections, you must reject pipelines that contain unbounded PCollections. If
 your runner is only targeting streams, there are adapters in Beam&rsquo;s 
support code
-to convert everything to APIs that target unbounded data.</p><h4 
id=timestamps>Timestamps</h4><p>Every element in a <code>PCollection</code> has 
a timestamp associated with it.</p><p>When you execute a primitive connector to 
a storage system, that connector is
+to convert everything to APIs that target unbounded 
data.</p><p><strong>Timestamps</strong>:</p><p>Every element in a 
<code>PCollection</code> has a timestamp associated with it.</p><p>When you 
execute a primitive connector to a storage system, that connector is
 responsible for providing initial timestamps. The runner must propagate and
 aggregate timestamps. If the timestamp is not important, such as with certain
 batch processing jobs where elements do not denote events, the timestamp will 
be
 the minimum representable timestamp, often referred to colloquially as 
&ldquo;negative
-infinity&rdquo;.</p><h4 id=watermarks>Watermarks</h4><p>Every 
<code>PCollection</code> must have a <a href=#watermark>watermark</a> that 
estimates how
+infinity&rdquo;.</p><p><strong>Watermarks</strong>:</p><p>Every 
<code>PCollection</code> must have a <a href=#watermark>watermark</a> that 
estimates how
 complete the <code>PCollection</code> is.</p><p>The watermark is a guess that 
&ldquo;we&rsquo;ll never see an element with an earlier
 timestamp&rdquo;. Data sources are responsible for producing a watermark. The 
runner
 must implement watermark propagation as PCollections are processed, merged, and
 partitioned.</p><p>The contents of a <code>PCollection</code> are complete 
when a watermark advances to
 &ldquo;infinity&rdquo;. In this manner, you can discover that an unbounded 
PCollection is
-finite.</p><h4 id=windowed-elements>Windowed elements</h4><p>Every element in 
a <code>PCollection</code> resides in a <a href=#window>window</a>. No element
+finite.</p><p><strong>Windowed elements</strong>:</p><p>Every element in a 
<code>PCollection</code> resides in a <a href=#window>window</a>. No element
 resides in multiple windows; two elements can be equal except for their window,
 but they are not the same.</p><p>When elements are written to the outside 
world, they are effectively placed back
 into the global window. Transforms that write data and don&rsquo;t take this
 perspective risk data loss.</p><p>A window has a maximum timestamp. When the 
watermark exceeds the maximum
 timestamp plus the user-specified allowed lateness, the window is expired. All
-data related to an expired window might be discarded at any time.</p><h4 
id=coder>Coder</h4><p>Every <code>PCollection</code> has a coder, which is a 
specification of the binary format
+data related to an expired window might be discarded at any 
time.</p><p><strong>Coder</strong>:</p><p>Every <code>PCollection</code> has a 
coder, which is a specification of the binary format
 of the elements.</p><p>In Beam, the user&rsquo;s pipeline can be written in a 
language other than the
 language of the runner. There is no expectation that the runner can actually
 deserialize user data. The Beam model operates principally on encoded data,
@@ -122,10 +104,28 @@ called a coder. A coder has a URN that identifies the 
encoding, and might have
 additional sub-coders. For example, a coder for lists might contain a coder for
 the elements of the list. Language-specific serialization techniques are
 frequently used, but there are a few common key formats (such as key-value 
pairs
-and timestamps) so the runner can understand them.</p><h4 
id=windowing-strategy>Windowing strategy</h4><p>Every <code>PCollection</code> 
has a windowing strategy, which is a specification of
+and timestamps) so the runner can understand them.</p><p><strong>Windowing 
strategy</strong>:</p><p>Every <code>PCollection</code> has a windowing 
strategy, which is a specification of
 essential information for grouping and triggering operations. The 
<code>Window</code>
 transform sets up the windowing strategy, and the <code>GroupByKey</code> 
transform has
-behavior that is governed by the windowing strategy.</p><br><p>For more 
information about PCollections, see the following page:</p><ul><li><a 
href=/documentation/programming-guide/#pcollections>Beam Programming Guide: 
PCollections</a></li></ul><h3 id=aggregation>Aggregation</h3><p>Aggregation is 
computing a value from multiple (1 or more) input elements. In
+behavior that is governed by the windowing strategy.</p><br><p>For more 
information about PCollections, see the following page:</p><ul><li><a 
href=/documentation/programming-guide/#pcollections>Beam Programming Guide: 
PCollections</a></li></ul><h3 id=ptransform>PTransform</h3><p>A 
<code>PTransform</code> (or transform) represents a data processing operation, 
or a step,
+in your pipeline. A transform is usually applied to one or more input
+<code>PCollection</code> objects. Transforms that read input are an exception; 
these
+transforms might not have an input <code>PCollection</code>.</p><p>You provide 
transform processing logic in the form of a function object
+(colloquially referred to as “user code”), and your user code is applied to 
each
+element of the input PCollection (or more than one PCollection). Depending on
+the pipeline runner and backend that you choose, many different workers across 
a
+cluster might execute instances of your user code in parallel. The user code
+that runs on each worker generates the output elements that are added to zero 
or
+more output <code>PCollection</code> objects.</p><p>The Beam SDKs contain a 
number of different transforms that you can apply to
+your pipeline’s PCollections. These include general-purpose core transforms,
+such as <code>ParDo</code> or <code>Combine</code>. There are also pre-written 
composite transforms
+included in the SDKs, which combine one or more of the core transforms in a
+useful processing pattern, such as counting or combining elements in a
+collection. You can also define your own more complex composite transforms to
+fit your pipeline’s exact use case.</p><p>The following list has some common 
transform types:</p><ul><li>Source transforms such as <code>TextIO.Read</code> 
and <code>Create</code>. A source transform
+conceptually has no input.</li><li>Processing and conversion operations such 
as <code>ParDo</code>, <code>GroupByKey</code>,
+<code>CoGroupByKey</code>, <code>Combine</code>, and 
<code>Count</code>.</li><li>Outputting transforms such as 
<code>TextIO.Write</code>.</li><li>User-defined, application-specific composite 
transforms.</li></ul><p>For more information about transforms, see the 
following pages:</p><ul><li><a 
href=/documentation/programming-guide/#overview>Beam Programming Guide: 
Overview</a></li><li><a href=/documentation/programming-guide/#transforms>Beam 
Programming Guide: Transforms</a></li><li>Beam t [...]
+<a href=/documentation/transforms/python/overview/>Python</a>)</li></ul><h3 
id=aggregation>Aggregation</h3><p>Aggregation is computing a value from 
multiple (1 or more) input elements. In
 Beam, the primary computational pattern for aggregation is to group all 
elements
 with a common key and window then combine each group of elements using an
 associative and commutative operation. This is similar to the 
&ldquo;Reduce&rdquo; operation
@@ -146,8 +146,8 @@ combined into a result for the original natural key for 
your problem. The
 associativity of your aggregation function ensures that this yields the same
 answer, but with more parallelism.</p><p>When your input is unbounded, the 
computational pattern of grouping elements by
 key and window is roughly the same, but governing when and how to emit the
-results of aggregation involves three concepts:</p><ul><li>Windowing, which 
partitions your input into bounded subsets that can be
-complete.</li><li>Watermarks, which estimate the completeness of your 
input.</li><li>Triggers, which govern when and how to emit aggregated 
results.</li></ul><p>For more information about available aggregation 
transforms, see the following
+results of aggregation involves three concepts:</p><ul><li><a 
href=#window>Windowing</a>, which partitions your input into bounded subsets 
that
+can be complete.</li><li><a href=#watermark>Watermarks</a>, which estimate the 
completeness of your input.</li><li><a href=#trigger>Triggers</a>, which govern 
when and how to emit aggregated results.</li></ul><p>For more information about 
available aggregation transforms, see the following
 pages:</p><ul><li><a 
href=/documentation/programming-guide/#core-beam-transforms>Beam Programming 
Guide: Core Beam transforms</a></li><li>Beam Transform catalog
 (<a href=/documentation/transforms/java/overview/#aggregation>Java</a>,
 <a 
href=/documentation/transforms/python/overview/#aggregation>Python</a>)</li></ul><h3
 id=user-defined-function-udf>User-defined function (UDF)</h3><p>Some Beam 
operations allow you to run user-defined code as a way to configure
@@ -228,10 +228,10 @@ must have a watermark that estimates how complete the 
<code>PCollection</code> i
 contents of a <code>PCollection</code> are complete when a watermark advances 
to
 “infinity”. In this manner, you might discover that an unbounded 
<code>PCollection</code>
 is finite. After the watermark progresses past the end of a window, any further
-element that arrives with a timestamp in that window is considered <em>late 
data</em>.</p><p>Triggers are a related concept that allow you to modify and 
refine the windowing
-strategy for a <code>PCollection</code>. You can use triggers to decide when 
each
-individual window aggregates and reports its results, including how the window
-emits late elements.</p><p>For more information about watermarks, see the 
following page:</p><ul><li><a 
href=/documentation/programming-guide/#watermarks-and-late-data>Beam 
Programming Guide: Watermarks and late data</a></li></ul><h3 
id=trigger>Trigger</h3><p>When collecting and grouping data into windows, Beam 
uses <em>triggers</em> to
+element that arrives with a timestamp in that window is considered <em>late 
data</em>.</p><p><a href=#trigger>Triggers</a> are a related concept that allow 
you to modify and refine
+the windowing strategy for a <code>PCollection</code>. You can use triggers to 
decide when
+each individual window aggregates and reports its results, including how the
+window emits late elements.</p><p>For more information about watermarks, see 
the following page:</p><ul><li><a 
href=/documentation/programming-guide/#watermarks-and-late-data>Beam 
Programming Guide: Watermarks and late data</a></li></ul><h3 
id=trigger>Trigger</h3><p>When collecting and grouping data into windows, Beam 
uses <em>triggers</em> to
 determine when to emit the aggregated results of each window (referred to as a
 <em>pane</em>). If you use Beam’s default windowing configuration and default 
trigger,
 Beam outputs the aggregated result when it estimates all data has arrived, and
@@ -302,7 +302,9 @@ checkpoint the sub-element and the runner repeats step 
2.</li></ol><p>You can al
 processing. For example, if you write a splittable <code>DoFn</code> to watch 
a set of
 directories and output filenames as they arrive, you can split to subdivide the
 work of different directories. This allows the runner to split off a hot
-directory and give it additional resources.</p><p>For more information about 
Splittable <code>DoFn</code>, see the following pages:</p><ul><li><a 
href=/documentation/programming-guide/#splittable-dofns>Splittable 
DoFns</a></li><li><a href=/blog/splittable-do-fn-is-available/>Splittable DoFn 
in Apache Beam is Ready to Use</a></li></ul><div class=feedback><p 
class=update>Last updated on 2021/12/06</p><h3>Have you found everything you 
were looking for?</h3><p class=description>Was it all us [...]
+directory and give it additional resources.</p><p>For more information about 
Splittable <code>DoFn</code>, see the following pages:</p><ul><li><a 
href=/documentation/programming-guide/#splittable-dofns>Splittable 
DoFns</a></li><li><a href=/blog/splittable-do-fn-is-available/>Splittable DoFn 
in Apache Beam is Ready to Use</a></li></ul><h3 id=whats-next>What&rsquo;s 
next</h3><p>Take a look at our <a href=/documentation/>other documentation</a> 
such as the Beam
+programming guide, pipeline execution information, and transform reference
+catalogs.</p><div class=feedback><p class=update>Last updated on 
2021/12/07</p><h3>Have you found everything you were looking for?</h3><p 
class=description>Was it all useful and clear? Is there anything that you would 
like to change? Let us know!</p><button class=load-button><a 
href="mailto:[email protected]?subject=Beam Website Feedback">SEND 
FEEDBACK</a></button></div></div></div><footer class=footer><div 
class=footer__contained><div class=footer__cols><div class="footer__cols__col f 
[...]
 <a href=http://www.apache.org>The Apache Software Foundation</a>
 | <a href=/privacy_policy>Privacy Policy</a>
 | <a href=/feed.xml>RSS Feed</a><br><br>Apache Beam, Apache, Beam, the Beam 
logo, and the Apache feather logo are either registered trademarks or 
trademarks of The Apache Software Foundation. All other products or name brands 
are trademarks of their respective holders, including The Apache Software 
Foundation.</div></div></div></div></footer></body></html>
\ No newline at end of file
diff --git a/website/generated-content/documentation/glossary/index.html 
b/website/generated-content/documentation/glossary/index.html
index dc9720b..fdba25f 100644
--- a/website/generated-content/documentation/glossary/index.html
+++ b/website/generated-content/documentation/glossary/index.html
@@ -18,7 +18,7 @@
 function addPlaceholder(){$('input:text').attr('placeholder',"What are you 
looking for?");}
 function endSearch(){var 
search=document.querySelector(".searchBar");search.classList.add("disappear");var
 icons=document.querySelector("#iconsBar");icons.classList.remove("disappear");}
 function blockScroll(){$("body").toggleClass("fixedPosition");}
-function openMenu(){addPlaceholder();blockScroll();}</script><div 
class="clearfix container-main-content"><div class="section-nav closed" 
data-offset-top=90 data-offset-bottom=500><span class="section-nav-back 
glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list 
data-section-nav><li><span 
class=section-nav-list-main-title>Documentation</span></li><li><a 
href=/documentation>Using the Documentation</a></li><li 
class=section-nav-item--collapsible><span class=section-nav-lis [...]
+function openMenu(){addPlaceholder();blockScroll();}</script><div 
class="clearfix container-main-content"><div class="section-nav closed" 
data-offset-top=90 data-offset-bottom=500><span class="section-nav-back 
glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list 
data-section-nav><li><span 
class=section-nav-list-main-title>Documentation</span></li><li><a 
href=/documentation>Using the Documentation</a></li><li 
class=section-nav-item--collapsible><span class=section-nav-lis [...]
 <a href=http://www.apache.org>The Apache Software Foundation</a>
 | <a href=/privacy_policy>Privacy Policy</a>
 | <a href=/feed.xml>RSS Feed</a><br><br>Apache Beam, Apache, Beam, the Beam 
logo, and the Apache feather logo are either registered trademarks or 
trademarks of The Apache Software Foundation. All other products or name brands 
are trademarks of their respective holders, including The Apache Software 
Foundation.</div></div></div></div></footer></body></html>
\ No newline at end of file
diff --git a/website/generated-content/documentation/index.xml 
b/website/generated-content/documentation/index.xml
index 53c27a4..ed056cc 100644
--- a/website/generated-content/documentation/index.xml
+++ b/website/generated-content/documentation/index.xml
@@ -3251,41 +3251,6 @@ sets and its transforms.&lt;/p>
 &lt;li>&lt;a href="/documentation/pipelines/design-your-pipeline">Design your 
pipeline&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/documentation/pipeline/create-your-pipeline">Create your 
pipeline&lt;/a>&lt;/li>
 &lt;/ul>
-&lt;h3 id="ptransform">PTransform&lt;/h3>
-&lt;p>A &lt;code>PTransform&lt;/code> (or transform) represents a data 
processing operation, or a step,
-in your pipeline. A transform is usually applied to one or more input
-&lt;code>PCollection&lt;/code> objects. Transforms that read input are an 
exception; these
-transforms might not have an input &lt;code>PCollection&lt;/code>.&lt;/p>
-&lt;p>You provide transform processing logic in the form of a function object
-(colloquially referred to as “user code”), and your user code is applied to 
each
-element of the input PCollection (or more than one PCollection). Depending on
-the pipeline runner and backend that you choose, many different workers across 
a
-cluster might execute instances of your user code in parallel. The user code
-that runs on each worker generates the output elements that are added to zero 
or
-more output &lt;code>PCollection&lt;/code> objects.&lt;/p>
-&lt;p>The Beam SDKs contain a number of different transforms that you can 
apply to
-your pipeline’s PCollections. These include general-purpose core transforms,
-such as &lt;code>ParDo&lt;/code> or &lt;code>Combine&lt;/code>. There are also 
pre-written composite transforms
-included in the SDKs, which combine one or more of the core transforms in a
-useful processing pattern, such as counting or combining elements in a
-collection. You can also define your own more complex composite transforms to
-fit your pipeline’s exact use case.&lt;/p>
-&lt;p>The following list has some common transform types:&lt;/p>
-&lt;ul>
-&lt;li>Source transforms such as &lt;code>TextIO.Read&lt;/code> and 
&lt;code>Create&lt;/code>. A source transform
-conceptually has no input.&lt;/li>
-&lt;li>Processing and conversion operations such as &lt;code>ParDo&lt;/code>, 
&lt;code>GroupByKey&lt;/code>,
-&lt;code>CoGroupByKey&lt;/code>, &lt;code>Combine&lt;/code>, and 
&lt;code>Count&lt;/code>.&lt;/li>
-&lt;li>Outputting transforms such as &lt;code>TextIO.Write&lt;/code>.&lt;/li>
-&lt;li>User-defined, application-specific composite transforms.&lt;/li>
-&lt;/ul>
-&lt;p>For more information about transforms, see the following pages:&lt;/p>
-&lt;ul>
-&lt;li>&lt;a href="/documentation/programming-guide/#overview">Beam 
Programming Guide: Overview&lt;/a>&lt;/li>
-&lt;li>&lt;a href="/documentation/programming-guide/#transforms">Beam 
Programming Guide: Transforms&lt;/a>&lt;/li>
-&lt;li>Beam transform catalog (&lt;a 
href="/documentation/transforms/java/overview/">Java&lt;/a>,
-&lt;a href="/documentation/transforms/python/overview/">Python&lt;/a>)&lt;/li>
-&lt;/ul>
 &lt;h3 id="pcollection">PCollection&lt;/h3>
 &lt;p>A &lt;code>PCollection&lt;/code> is an unordered bag of elements. Each 
&lt;code>PCollection&lt;/code> is a
 potentially distributed, homogeneous data set or data stream, and is owned by
@@ -3299,7 +3264,7 @@ transforms are focused on situations where distributed 
data-parallel computation
 is required. Therefore, the elements of a &lt;code>PCollection&lt;/code> 
cannot be processed
 individually, and are instead processed uniformly in parallel.&lt;/p>
 &lt;p>The following characteristics of a &lt;code>PCollection&lt;/code> are 
important to know.&lt;/p>
-&lt;h4 id="bounded-vs-unbounded">Bounded vs unbounded&lt;/h4>
+&lt;p>&lt;strong>Bounded vs. unbounded&lt;/strong>:&lt;/p>
 &lt;p>A &lt;code>PCollection&lt;/code> can be either bounded or 
unbounded.&lt;/p>
 &lt;ul>
 &lt;li>A &lt;em>bounded&lt;/em> &lt;code>PCollection&lt;/code> is a dataset of 
a known, fixed size (alternatively,
@@ -3315,7 +3280,7 @@ coexist in the same pipeline. If your runner can only 
support bounded
 PCollections, you must reject pipelines that contain unbounded PCollections. If
 your runner is only targeting streams, there are adapters in Beam&amp;rsquo;s 
support code
 to convert everything to APIs that target unbounded data.&lt;/p>
-&lt;h4 id="timestamps">Timestamps&lt;/h4>
+&lt;p>&lt;strong>Timestamps&lt;/strong>:&lt;/p>
 &lt;p>Every element in a &lt;code>PCollection&lt;/code> has a timestamp 
associated with it.&lt;/p>
 &lt;p>When you execute a primitive connector to a storage system, that 
connector is
 responsible for providing initial timestamps. The runner must propagate and
@@ -3323,7 +3288,7 @@ aggregate timestamps. If the timestamp is not important, 
such as with certain
 batch processing jobs where elements do not denote events, the timestamp will 
be
 the minimum representable timestamp, often referred to colloquially as 
&amp;ldquo;negative
 infinity&amp;rdquo;.&lt;/p>
-&lt;h4 id="watermarks">Watermarks&lt;/h4>
+&lt;p>&lt;strong>Watermarks&lt;/strong>:&lt;/p>
 &lt;p>Every &lt;code>PCollection&lt;/code> must have a &lt;a 
href="#watermark">watermark&lt;/a> that estimates how
 complete the &lt;code>PCollection&lt;/code> is.&lt;/p>
 &lt;p>The watermark is a guess that &amp;ldquo;we&amp;rsquo;ll never see an 
element with an earlier
@@ -3333,7 +3298,7 @@ partitioned.&lt;/p>
 &lt;p>The contents of a &lt;code>PCollection&lt;/code> are complete when a 
watermark advances to
 &amp;ldquo;infinity&amp;rdquo;. In this manner, you can discover that an 
unbounded PCollection is
 finite.&lt;/p>
-&lt;h4 id="windowed-elements">Windowed elements&lt;/h4>
+&lt;p>&lt;strong>Windowed elements&lt;/strong>:&lt;/p>
 &lt;p>Every element in a &lt;code>PCollection&lt;/code> resides in a &lt;a 
href="#window">window&lt;/a>. No element
 resides in multiple windows; two elements can be equal except for their window,
 but they are not the same.&lt;/p>
@@ -3343,7 +3308,7 @@ perspective risk data loss.&lt;/p>
 &lt;p>A window has a maximum timestamp. When the watermark exceeds the maximum
 timestamp plus the user-specified allowed lateness, the window is expired. All
 data related to an expired window might be discarded at any time.&lt;/p>
-&lt;h4 id="coder">Coder&lt;/h4>
+&lt;p>&lt;strong>Coder&lt;/strong>:&lt;/p>
 &lt;p>Every &lt;code>PCollection&lt;/code> has a coder, which is a 
specification of the binary format
 of the elements.&lt;/p>
 &lt;p>In Beam, the user&amp;rsquo;s pipeline can be written in a language 
other than the
@@ -3355,7 +3320,7 @@ additional sub-coders. For example, a coder for lists 
might contain a coder for
 the elements of the list. Language-specific serialization techniques are
 frequently used, but there are a few common key formats (such as key-value 
pairs
 and timestamps) so the runner can understand them.&lt;/p>
-&lt;h4 id="windowing-strategy">Windowing strategy&lt;/h4>
+&lt;p>&lt;strong>Windowing strategy&lt;/strong>:&lt;/p>
 &lt;p>Every &lt;code>PCollection&lt;/code> has a windowing strategy, which is 
a specification of
 essential information for grouping and triggering operations. The 
&lt;code>Window&lt;/code>
 transform sets up the windowing strategy, and the 
&lt;code>GroupByKey&lt;/code> transform has
@@ -3365,6 +3330,41 @@ behavior that is governed by the windowing 
strategy.&lt;/p>
 &lt;ul>
 &lt;li>&lt;a href="/documentation/programming-guide/#pcollections">Beam 
Programming Guide: PCollections&lt;/a>&lt;/li>
 &lt;/ul>
+&lt;h3 id="ptransform">PTransform&lt;/h3>
+&lt;p>A &lt;code>PTransform&lt;/code> (or transform) represents a data 
processing operation, or a step,
+in your pipeline. A transform is usually applied to one or more input
+&lt;code>PCollection&lt;/code> objects. Transforms that read input are an 
exception; these
+transforms might not have an input &lt;code>PCollection&lt;/code>.&lt;/p>
+&lt;p>You provide transform processing logic in the form of a function object
+(colloquially referred to as “user code”), and your user code is applied to 
each
+element of the input PCollection (or more than one PCollection). Depending on
+the pipeline runner and backend that you choose, many different workers across 
a
+cluster might execute instances of your user code in parallel. The user code
+that runs on each worker generates the output elements that are added to zero 
or
+more output &lt;code>PCollection&lt;/code> objects.&lt;/p>
+&lt;p>The Beam SDKs contain a number of different transforms that you can 
apply to
+your pipeline’s PCollections. These include general-purpose core transforms,
+such as &lt;code>ParDo&lt;/code> or &lt;code>Combine&lt;/code>. There are also 
pre-written composite transforms
+included in the SDKs, which combine one or more of the core transforms in a
+useful processing pattern, such as counting or combining elements in a
+collection. You can also define your own more complex composite transforms to
+fit your pipeline’s exact use case.&lt;/p>
+&lt;p>The following list has some common transform types:&lt;/p>
+&lt;ul>
+&lt;li>Source transforms such as &lt;code>TextIO.Read&lt;/code> and 
&lt;code>Create&lt;/code>. A source transform
+conceptually has no input.&lt;/li>
+&lt;li>Processing and conversion operations such as &lt;code>ParDo&lt;/code>, 
&lt;code>GroupByKey&lt;/code>,
+&lt;code>CoGroupByKey&lt;/code>, &lt;code>Combine&lt;/code>, and 
&lt;code>Count&lt;/code>.&lt;/li>
+&lt;li>Outputting transforms such as &lt;code>TextIO.Write&lt;/code>.&lt;/li>
+&lt;li>User-defined, application-specific composite transforms.&lt;/li>
+&lt;/ul>
+&lt;p>For more information about transforms, see the following pages:&lt;/p>
+&lt;ul>
+&lt;li>&lt;a href="/documentation/programming-guide/#overview">Beam 
Programming Guide: Overview&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/programming-guide/#transforms">Beam 
Programming Guide: Transforms&lt;/a>&lt;/li>
+&lt;li>Beam transform catalog (&lt;a 
href="/documentation/transforms/java/overview/">Java&lt;/a>,
+&lt;a href="/documentation/transforms/python/overview/">Python&lt;/a>)&lt;/li>
+&lt;/ul>
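The element-wise application of user code described above — each call may emit zero or more outputs — can be sketched in a few lines. This is a toy serial model, not Beam's `ParDo` (a runner would fan the calls out across many workers):

```python
def par_do(elements, user_fn):
    """Toy ParDo: apply user code to each input element; each call may
    emit zero or more output elements (returned as an iterable)."""
    output = []
    for element in elements:
        output.extend(user_fn(element))
    return output

# User code emits one word per output element; empty lines emit nothing.
words = par_do(["hello world", "", "beam"], lambda line: line.split())
assert words == ["hello", "world", "beam"]
```

The empty line producing zero outputs illustrates why a transform's output `PCollection` need not be the same size as its input.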
 &lt;h3 id="aggregation">Aggregation&lt;/h3>
 &lt;p>Aggregation is computing a value from one or more input 
elements. In
 Beam, the primary computational pattern for aggregation is to group all 
elements
@@ -3395,10 +3395,10 @@ answer, but with more parallelism.&lt;/p>
 key and window is roughly the same, but governing when and how to emit the
 results of aggregation involves three concepts:&lt;/p>
 &lt;ul>
-&lt;li>Windowing, which partitions your input into bounded subsets that can be
-complete.&lt;/li>
-&lt;li>Watermarks, which estimate the completeness of your input.&lt;/li>
-&lt;li>Triggers, which govern when and how to emit aggregated results.&lt;/li>
+&lt;li>&lt;a href="#window">Windowing&lt;/a>, which partitions your input into 
bounded subsets that
+can be complete.&lt;/li>
+&lt;li>&lt;a href="#watermark">Watermarks&lt;/a>, which estimate the 
completeness of your input.&lt;/li>
+&lt;li>&lt;a href="#trigger">Triggers&lt;/a>, which govern when and how to 
emit aggregated results.&lt;/li>
 &lt;/ul>
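The basic pattern — group all elements with a common key and window, then combine — can be sketched as follows. This is a conceptual sketch under the fixed-window assumption above, not Beam's `GroupByKey`/`Combine` implementation:

```python
from collections import defaultdict

def group_and_sum(records, window_size):
    """Toy per-key, per-window aggregation: group (key, value, timestamp)
    records by (key, window) and sum the values within each group."""
    sums = defaultdict(int)
    for key, value, ts in records:
        window_start = ts - (ts % window_size)
        sums[(key, window_start)] += value
    return dict(sums)

records = [("a", 1, 5), ("a", 2, 15), ("b", 3, 5)]
# Timestamps 5 and 15 fall in the 10-second windows starting at 0 and 10.
assert group_and_sum(records, 10) == {("a", 0): 1, ("a", 10): 2, ("b", 0): 3}
```

The sketch makes the unbounded-data problem concrete: with a stream, you never know when a `(key, window)` group is complete, which is exactly what watermarks and triggers address.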
 &lt;p>For more information about available aggregation transforms, see the 
following
 pages:&lt;/p>
@@ -3548,10 +3548,10 @@ contents of a &lt;code>PCollection&lt;/code> are 
complete when a watermark advan
 “infinity”. In this manner, you might discover that an unbounded 
&lt;code>PCollection&lt;/code>
 is finite. After the watermark progresses past the end of a window, any further
 element that arrives with a timestamp in that window is considered &lt;em>late 
data&lt;/em>.&lt;/p>
-&lt;p>Triggers are a related concept that allow you to modify and refine the 
windowing
-strategy for a &lt;code>PCollection&lt;/code>. You can use triggers to decide 
when each
-individual window aggregates and reports its results, including how the window
-emits late elements.&lt;/p>
+&lt;p>&lt;a href="#trigger">Triggers&lt;/a> are a related concept that allow 
you to modify and refine
+the windowing strategy for a &lt;code>PCollection&lt;/code>. You can use 
triggers to decide when
+each individual window aggregates and reports its results, including how the
+window emits late elements.&lt;/p>
 &lt;p>For more information about watermarks, see the following page:&lt;/p>
 &lt;ul>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#watermarks-and-late-data">Beam 
Programming Guide: Watermarks and late data&lt;/a>&lt;/li>
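The watermark-and-lateness relationship described above can be sketched in a few lines. This is a conceptual model (one simple estimation policy), not how any particular runner computes its watermark:

```python
def watermark(pending_event_timestamps):
    """Toy watermark: a lower bound on event timestamps still to be seen,
    estimated here as the minimum timestamp among pending elements."""
    return min(pending_event_timestamps, default=float("inf"))

def is_late(element_ts, window_end, current_watermark):
    """An element is late if the watermark has already passed the end of
    the window its timestamp falls in when the element arrives."""
    return element_ts < window_end <= current_watermark

wm = watermark([130, 145, 160])
assert wm == 130
assert is_late(55, 60, wm)        # window [0, 60) already closed: late data
assert not is_late(140, 180, wm)  # window [120, 180) not yet closed
```

An empty pending set yields an infinite watermark, matching the text's point that a finished unbounded `PCollection` has its watermark advance to "infinity".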
@@ -3695,7 +3695,11 @@ directory and give it additional resources.&lt;/p>
 &lt;ul>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#splittable-dofns">Splittable 
DoFns&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/blog/splittable-do-fn-is-available/">Splittable DoFn in 
Apache Beam is Ready to Use&lt;/a>&lt;/li>
-&lt;/ul></description></item><item><title>Documentation: Beam 
glossary</title><link>/documentation/glossary/</link><pubDate>Mon, 01 Jan 0001 
00:00:00 +0000</pubDate><guid>/documentation/glossary/</guid><description>
+&lt;/ul>
+&lt;h3 id="whats-next">What&amp;rsquo;s next&lt;/h3>
+&lt;p>Take a look at our &lt;a href="/documentation/">other documentation&lt;/a>, 

such as the Beam
+programming guide, pipeline execution information, and transform reference
+catalogs.&lt;/p></description></item><item><title>Documentation: Beam 
glossary</title><link>/documentation/glossary/</link><pubDate>Mon, 01 Jan 0001 
00:00:00 +0000</pubDate><guid>/documentation/glossary/</guid><description>
 &lt;!--
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -3715,6 +3719,10 @@ limitations under the License.
 &lt;li>&lt;a href="/documentation/transforms/java/overview/#aggregation">Java 
Transform catalog&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/transforms/python/overview/#aggregation">Python Transform 
catalog&lt;/a>&lt;/li>
 &lt;/ul>
+&lt;p>To learn more, see:&lt;/p>
+&lt;ul>
+&lt;li>&lt;a href="/documentation/basics/#aggregation">Basics of the Beam 
model: Aggregation&lt;/a>&lt;/li>
+&lt;/ul>
 &lt;h2 id="apply">Apply&lt;/h2>
 &lt;p>A method for invoking a transform on an input PCollection (or set of 
PCollections) to produce one or more output PCollections. The 
&lt;code>apply&lt;/code> method is attached to the PCollection (or value). 
Invoking multiple Beam transforms is similar to method chaining, but with a 
difference: You apply the transform to the input PCollection, passing the 
transform itself as an argument, and the operation returns the output 
PCollection. Because of Beam’s deferred execution model, app [...]
 &lt;p>To learn more, see:&lt;/p>
@@ -3899,7 +3907,8 @@ limitations under the License.
 &lt;p>A potentially distributed, homogeneous dataset or data stream. 
PCollections represent data in a Beam pipeline, and Beam transforms 
(PTransforms) use PCollection objects as inputs and outputs. PCollections are 
intended to be immutable, meaning that once a PCollection is created, you can’t 
add, remove, or change individual elements. The “P” stands for 
“parallel.”&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#pcollections">PCollections&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#pcollection">Basics of the Beam 
model: PCollection&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/programming-guide/#pcollections">Programming 
guide: PCollections&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="pipe-operator-">Pipe operator (&lt;code>|&lt;/code>)&lt;/h2>
 &lt;p>Delimits a step in a Python pipeline. For example: &lt;code>[Final 
Output PCollection] = ([Initial Input PCollection] | [First Transform] | 
[Second Transform] | [Third Transform])&lt;/code>. The output of each transform 
is passed from left to right as input to the next transform. The pipe operator 
in Python is equivalent to the &lt;code>apply&lt;/code> method in Java (in 
other words, the pipe applies a transform to a PCollection), and usage is 
similar to the pipe operator in shell  [...]
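The mechanics behind the pipe operator — `|` applies a transform to a `PCollection` and returns a new `PCollection`, so transforms chain left to right — can be sketched with Python's `__or__` overload. This is a toy model of the idea, not the Beam SDK's actual classes:

```python
class PCollection:
    """Toy PCollection: wraps data and supports `|` to apply a transform."""
    def __init__(self, elements):
        self.elements = list(elements)
    def __or__(self, transform):
        # `pcoll | transform` applies the transform and returns a new
        # PCollection, which is what lets transforms chain left to right.
        return PCollection(transform(self.elements))

double = lambda xs: [x * 2 for x in xs]
keep_big = lambda xs: [x for x in xs if x > 4]

result = PCollection([1, 2, 3]) | double | keep_big
assert result.elements == [6]
```

Each `|` returns a fresh `PCollection` rather than mutating the input, mirroring the immutability of real PCollections.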
@@ -3911,6 +3920,7 @@ limitations under the License.
 &lt;p>An encapsulation of your entire data processing task, including reading 
input data from a source, transforming that data, and writing output data to a 
sink. You can think of a pipeline as a Beam program that uses PTransforms to 
process PCollections. (Alternatively, you can think of it as a single, 
executable composite PTransform with no inputs or outputs.) The transforms in a 
pipeline can be represented as a directed acyclic graph (DAG). All Beam driver 
programs must create a pipel [...]
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
+&lt;li>&lt;a href="/documentation/basics/#pipeline">Basics of the Beam model: 
Pipeline&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#overview">Overview&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#creating-a-pipeline">Creating a 
pipeline&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/documentation/pipelines/design-your-pipeline/">Design your 
pipeline&lt;/a>&lt;/li>
@@ -3927,6 +3937,7 @@ limitations under the License.
 &lt;p>A data processing operation, or a step, in your pipeline. A PTransform 
takes zero or more PCollections as input, applies a processing function to the 
elements of that PCollection, and produces zero or more output PCollections. 
Some PTransforms accept user-defined functions that apply custom logic. The “P” 
stands for “parallel.”&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
+&lt;li>&lt;a href="/documentation/basics/#ptransform">Basics of the Beam 
model: PTransform&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#overview">Overview&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#transforms">Transforms&lt;/a>&lt;/li>
 &lt;/ul>
@@ -3940,6 +3951,7 @@ limitations under the License.
 &lt;p>A runner runs a pipeline on a specific platform. Most runners are 
translators or adapters to massively parallel big data processing systems. 
Other runners exist for local testing and debugging. Among the supported 
runners are Google Cloud Dataflow, Apache Spark, Apache Samza, Apache Flink, 
the Interactive Runner, and the Direct Runner.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
+&lt;li>&lt;a href="/documentation/basics/#runner">Basics of the Beam model: 
Runner&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/documentation/#choosing-a-runner">Choosing a 
Runner&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/documentation/runners/capability-matrix/">Beam Capability 
Matrix&lt;/a>&lt;/li>
 &lt;/ul>
@@ -3947,7 +3959,8 @@ limitations under the License.
 &lt;p>A language-independent type definition for the elements of a 
PCollection. The schema for a PCollection defines elements of that PCollection 
as an ordered list of named fields. Each field has a name, a type, and possibly 
a set of user options. Schemas provide a way to reason about types across 
different programming-language APIs. They also let you describe data 
transformations more succinctly and at a higher level.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#schemas">Schemas&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#schema">Basics of the Beam model: 
Schema&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/programming-guide/#schemas">Programming 
guide: Schemas&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/documentation/patterns/schema/">Schema 
Patterns&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="session">Session&lt;/h2>
@@ -3984,7 +3997,8 @@ limitations under the License.
 &lt;p>A generalization of DoFn that makes it easier to create complex, modular 
I/O connectors. A Splittable DoFn (SDF) can process elements in a 
non-monolithic way, meaning that the processing can be decomposed into smaller 
tasks. With SDF, you can check-point the processing of an element, and you can 
split the remaining work to yield additional parallelism. SDF is recommended 
for building new I/O connectors.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#splittable-dofns">Splittable 
DoFns&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#splittable-dofn">Basics of the Beam 
model: Splittable DoFn&lt;/a>&lt;/li>
+&lt;li>&lt;a 
href="/documentation/programming-guide/#splittable-dofns">Programming guide: 
Splittable DoFns&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/blog/splittable-do-fn-is-available/">Splittable DoFn in 
Apache Beam is Ready to Use&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="stage">Stage&lt;/h2>
@@ -3993,7 +4007,8 @@ limitations under the License.
 &lt;p>Persistent values that a PTransform can access. The state API lets you 
augment element-wise operations (for example, ParDo or Map) with mutable state. 
Using the state API, you can read from, and write to, state as you process each 
element of a PCollection. You can use the state API together with the timer API 
to create processing tasks that give you fine-grained control over the 
workflow. State is always local to a key and window.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a href="/documentation/programming-guide/#state-and-timers">State 
and Timers&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#state-and-timers">Basics of the Beam 
model: State and timers&lt;/a>&lt;/li>
+&lt;li>&lt;a 
href="/documentation/programming-guide/#state-and-timers">Programming guide: 
State and Timers&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/blog/stateful-processing/">Stateful processing with Apache 
Beam&lt;/a>&lt;/li>
 &lt;/ul>
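The defining property above — state is mutable, read and written per element, and always local to a key — can be sketched as follows. This is a conceptual sketch, not Beam's state API (which also scopes state to a window and persists it via the runner):

```python
from collections import defaultdict

class CountingDoFn:
    """Toy stateful per-key processing: mutable state is local to a key,
    read and updated as each element is processed."""
    def __init__(self):
        self._state = defaultdict(int)  # per-key counter state
    def process(self, key, element):
        self._state[key] += 1
        # Emit the element together with the running count for its key.
        return (key, element, self._state[key])

fn = CountingDoFn()
assert fn.process("a", "x") == ("a", "x", 1)
assert fn.process("a", "y") == ("a", "y", 2)
assert fn.process("b", "z") == ("b", "z", 1)  # state for "b" is independent
```

Because each key's counter is independent, a runner can shard keys across workers without coordination — the reason state is restricted to key-and-window scope.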
 &lt;h2 id="streaming">Streaming&lt;/h2>
@@ -4007,7 +4022,8 @@ limitations under the License.
 &lt;p>A Beam feature that enables delayed processing of data stored using the 
state API. The timer API lets you set timers to call back at either an 
event-time or a processing-time timestamp. You can use the timer API together 
with the state API to create processing tasks that give you fine-grained 
control over the workflow.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a href="/documentation/programming-guide/#state-and-timers">State 
and Timers&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#state-and-timers">Basics of the Beam 
model: State and timers&lt;/a>&lt;/li>
+&lt;li>&lt;a 
href="/documentation/programming-guide/#state-and-timers">Programming guide: 
State and Timers&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/blog/stateful-processing/">Stateful processing with Apache 
Beam&lt;/a>&lt;/li>
 &lt;li>&lt;a href="/blog/timely-processing/">Timely (and Stateful) Processing 
with Apache Beam&lt;/a>&lt;/li>
 &lt;/ul>
@@ -4015,6 +4031,7 @@ limitations under the License.
 &lt;p>A point in event time associated with an element in a PCollection and 
used to assign a window to the element. The source that creates the PCollection 
assigns each element an initial timestamp, often corresponding to when the 
element was read or added. But you can also manually assign timestamps. This 
can be useful if elements have an inherent timestamp, but the timestamp is 
somewhere in the structure of the element itself (for example, a time field in 
a server log entry).&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
+&lt;li>&lt;a href="/documentation/basics/#timestamp">Basics of the Beam model: 
Timestamp&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#element-timestamps">Element 
timestamps&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#adding-timestamps-to-a-pcollections-elements">Adding
 timestamps to a PCollection’s elements&lt;/a>&lt;/li>
 &lt;/ul>
@@ -4024,7 +4041,8 @@ limitations under the License.
 &lt;p>Determines when to emit aggregated result data from a window. You can 
use triggers to refine the windowing strategy for your pipeline. If you use the 
default windowing configuration and default trigger, Beam outputs an aggregated 
result when it estimates that all data for a window has arrived, and it 
discards all subsequent data for that window. But you can also use triggers to 
emit early results, before all the data in a given window has arrived, or to 
process late data by trigger [...]
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#triggers">Triggers&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#trigger">Basics of the Beam model: 
Trigger&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/programming-guide/#triggers">Programming 
guide: Triggers&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="unbounded-data">Unbounded data&lt;/h2>
 &lt;p>A dataset that grows over time, with elements processed as they arrive. 
A PCollection can be bounded or unbounded, depending on the source of the data 
that it represents. Reading from a streaming or continuously-updating data 
source, such as Pub/Sub or Kafka, typically creates an unbounded 
PCollection.&lt;/p>
@@ -4036,7 +4054,7 @@ limitations under the License.
 &lt;p>Custom logic that a PTransform applies to your data. Some PTransforms 
accept a user-defined function (UDF) as a way to configure the transform. For 
example, ParDo expects user code in the form of a DoFn object. Each language 
SDK has its own idiomatic way of expressing user-defined functions, but there 
are some common requirements, like serializability and thread 
compatibility.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/basics/#user-defined-functions-udfs">User-Defined 
Functions (UDFs)&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#user-defined-functions-udfs">Basics 
of the Beam model: User-Defined Functions (UDFs)&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#pardo">ParDo&lt;/a>&lt;/li>
 &lt;li>&lt;a 
href="/documentation/programming-guide/#requirements-for-writing-user-code-for-beam-transforms">Requirements
 for writing user code for Beam transforms&lt;/a>&lt;/li>
 &lt;/ul>
@@ -4044,13 +4062,15 @@ limitations under the License.
 &lt;p>An estimate on the lower bound of the timestamps that will be seen (in 
the future) at this point of the pipeline. Watermarks provide a way to estimate 
the completeness of input data. Every PCollection has an associated watermark. 
Once the watermark progresses past the end of a window, any element that 
arrives with a timestamp in that window is considered late data.&lt;/p>
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#watermarks-and-late-data">Watermarks 
and late data&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#watermark">Basics of the Beam model: 
Watermark&lt;/a>&lt;/li>
+&lt;li>&lt;a 
href="/documentation/programming-guide/#watermarks-and-late-data">Programming 
guide: Watermarks and late data&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="windowing">Windowing&lt;/h2>
 &lt;p>Partitioning a PCollection into bounded subsets grouped by the 
timestamps of individual elements. In the Beam model, any PCollection – 
including unbounded PCollections – can be subdivided into logical windows. Each 
element in a PCollection is assigned to one or more windows according to the 
PCollection&amp;rsquo;s windowing function, and each individual window contains 
a finite number of elements. Transforms that aggregate multiple elements, such 
as GroupByKey and Combine, work imp [...]
 &lt;p>To learn more, see:&lt;/p>
 &lt;ul>
-&lt;li>&lt;a 
href="/documentation/programming-guide/#windowing">Windowing&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/basics/#window">Basics of the Beam model: 
Window&lt;/a>&lt;/li>
+&lt;li>&lt;a href="/documentation/programming-guide/#windowing">Programming 
guide: Windowing&lt;/a>&lt;/li>
 &lt;/ul>
 &lt;h2 id="worker">Worker&lt;/h2>
 &lt;p>A container, process, or virtual machine (VM) that handles some part of 
the parallel processing of a pipeline. Each worker node has its own independent 
copy of state. A Beam runner might serialize elements between machines for 
communication purposes and for other reasons such as persistence.&lt;/p>
@@ -4072,11 +4092,14 @@ limitations under the License.
 &lt;h1 id="apache-beam-programming-guide">Apache Beam Programming Guide&lt;/h1>
 &lt;p>The &lt;strong>Beam Programming Guide&lt;/strong> is intended for Beam 
users who want to use the
 Beam SDKs to create data processing pipelines. It provides guidance for using
-the Beam SDK classes to build and test your pipeline. It is not intended as an
-exhaustive reference, but as a language-agnostic, high-level guide to
-programmatically building your Beam pipeline. As the programming guide is 
filled
-out, the text will include code samples in multiple languages to help 
illustrate
-how to implement Beam concepts in your pipelines.&lt;/p>
+the Beam SDK classes to build and test your pipeline. The programming guide is
+not intended as an exhaustive reference, but as a language-agnostic, high-level
+guide to programmatically building your Beam pipeline. As the programming guide
+is filled out, the text will include code samples in multiple languages to help
+illustrate how to implement Beam concepts in your pipelines.&lt;/p>
+&lt;p>If you want a brief introduction to Beam&amp;rsquo;s basic concepts 
before reading the
+programming guide, take a look at the
+&lt;a href="/documentation/basics/">Basics of the Beam model&lt;/a> 
page.&lt;/p>
 &lt;nav class="language-switcher">
 &lt;strong>Adapt for:&lt;/strong>
 &lt;ul>
diff --git 
a/website/generated-content/documentation/programming-guide/index.html 
b/website/generated-content/documentation/programming-guide/index.html
index 82acce2..d27b3d1 100644
--- a/website/generated-content/documentation/programming-guide/index.html
+++ b/website/generated-content/documentation/programming-guide/index.html
@@ -20,11 +20,13 @@ function endSearch(){var 
search=document.querySelector(".searchBar");search.clas
 function blockScroll(){$("body").toggleClass("fixedPosition");}
 function openMenu(){addPlaceholder();blockScroll();}</script><div 
class="clearfix container-main-content"><div class="section-nav closed" 
data-offset-top=90 data-offset-bottom=500><span class="section-nav-back 
glyphicon glyphicon-menu-left"></span><nav><ul class=section-nav-list 
data-section-nav><li><span 
class=section-nav-list-main-title>Documentation</span></li><li><a 
href=/documentation>Using the Documentation</a></li><li 
class=section-nav-item--collapsible><span class=section-nav-lis [...]
 Beam SDKs to create data processing pipelines. It provides guidance for using
-the Beam SDK classes to build and test your pipeline. It is not intended as an
-exhaustive reference, but as a language-agnostic, high-level guide to
-programmatically building your Beam pipeline. As the programming guide is 
filled
-out, the text will include code samples in multiple languages to help 
illustrate
-how to implement Beam concepts in your pipelines.</p><nav 
class=language-switcher><strong>Adapt for:</strong><ul><li 
data-type=language-java class=active>Java SDK</li><li 
data-type=language-py>Python SDK</li><li data-type=language-go>Go 
SDK</li></ul></nav><p class=language-py>The Python SDK supports Python 3.6, 
3.7, and 3.8. Beam 2.24.0 was the last Python SDK release to support Python 2 
and 3.5.</p><p class=language-go>The Go SDK supports Go v1.16+. SDK release 
2.32.0 is the last experi [...]
+the Beam SDK classes to build and test your pipeline. The programming guide is
+not intended as an exhaustive reference, but as a language-agnostic, high-level
+guide to programmatically building your Beam pipeline. As the programming guide
+is filled out, the text will include code samples in multiple languages to help
+illustrate how to implement Beam concepts in your pipelines.</p><p>If you want 
a brief introduction to Beam&rsquo;s basic concepts before reading the
+programming guide, take a look at the
+<a href=/documentation/basics/>Basics of the Beam model</a> page.</p><nav 
class=language-switcher><strong>Adapt for:</strong><ul><li 
data-type=language-java class=active>Java SDK</li><li 
data-type=language-py>Python SDK</li><li data-type=language-go>Go 
SDK</li></ul></nav><p class=language-py>The Python SDK supports Python 3.6, 
3.7, and 3.8. Beam 2.24.0 was the last Python SDK release to support Python 2 
and 3.5.</p><p class=language-go>The Go SDK supports Go v1.16+. SDK release 
2.32.0 is [...]
 of the Beam SDKs. Your driver program <em>defines</em> your pipeline, 
including all of
 the inputs, transforms, and outputs; it also sets execution options for your
 pipeline (typically passed in using command-line options). These include the
@@ -4310,7 +4312,7 @@ expansionAddr := &#34;localhost:8097&#34;
 outT := beam.UnnamedOutput(typex.New(reflectx.String))
 res := beam.CrossLanguage(s, urn, payload, expansionAddr, 
beam.UnnamedInput(inputPCol), outT)
    </code></pre></div></div></li><li><p>After the job has been submitted to 
the Beam runner, shut down the expansion service by
-terminating the expansion service process.</p></li></ol><h3 
id=x-lang-transform-runner-support>13.3. Runner Support</h3><p>Currently, 
portable runners such as Flink, Spark, and the Direct runner can be used with 
multi-language pipelines.</p><p>Google Cloud Dataflow supports multi-language 
pipelines through the Dataflow Runner v2 backend architecture.</p><div 
class=feedback><p class=update>Last updated on 2021/12/06</p><h3>Have you found 
everything you were looking for?</h3><p class=descr [...]
+terminating the expansion service process.</p></li></ol><h3 
id=x-lang-transform-runner-support>13.3. Runner Support</h3><p>Currently, 
portable runners such as Flink, Spark, and the Direct runner can be used with 
multi-language pipelines.</p><p>Google Cloud Dataflow supports multi-language 
pipelines through the Dataflow Runner v2 backend architecture.</p><div 
class=feedback><p class=update>Last updated on 2021/12/07</p><h3>Have you found 
everything you were looking for?</h3><p class=descr [...]
 <a href=http://www.apache.org>The Apache Software Foundation</a>
 | <a href=/privacy_policy>Privacy Policy</a>
 | <a href=/feed.xml>RSS Feed</a><br><br>Apache Beam, Apache, Beam, the Beam 
logo, and the Apache feather logo are either registered trademarks or 
trademarks of The Apache Software Foundation. All other products or name brands 
are trademarks of their respective holders, including The Apache Software 
Foundation.</div></div></div></div></footer></body></html>
\ No newline at end of file
diff --git a/website/generated-content/sitemap.xml 
b/website/generated-content/sitemap.xml
index bd28479..0e909c8 100644
--- a/website/generated-content/sitemap.xml
+++ b/website/generated-content/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/blog/beam-2.34.0/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/blog/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/categories/</loc><lastmod>2021-12-01T21:32:04+03:00</lastmod></url><url><loc>/blog/g
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/blog/beam-2.34.0/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/categories/blog/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/blog/</loc><lastmod>2021-11-11T11:07:06-08:00</lastmod></url><url><loc>/categories/</loc><lastmod>2021-12-01T21:32:04+03:00</lastmod></url><url><loc>/blog/g
 [...]
\ No newline at end of file
