[beam-site] 02/03: Explicitly define section id due to kramdown id generation changes

2018-03-02 Thread mergebot-role
This is an automated email from the ASF dual-hosted git repository.

mergebot-role pushed a commit to branch mergebot
in repository https://gitbox.apache.org/repos/asf/beam-site.git

commit d64339ef6412ce2d144026c74537932282ce6da2
Author: melissa 
AuthorDate: Tue Feb 20 14:18:14 2018 -0800

Explicitly define section id due to kramdown id generation changes
---
 src/documentation/programming-guide.md | 195 -
 1 file changed, 95 insertions(+), 100 deletions(-)

diff --git a/src/documentation/programming-guide.md b/src/documentation/programming-guide.md
index 7f6aea5..6b86743 100644
--- a/src/documentation/programming-guide.md
+++ b/src/documentation/programming-guide.md
@@ -26,12 +26,7 @@ how to implement Beam concepts in your pipelines.
   
 
 
-**Table of Contents:**
-* TOC
-{:toc}
-
-
-## 1. Overview
+## 1. Overview {#overview}
 
 To use Beam, you need to first create a driver program using the classes in one
 of the Beam SDKs. Your driver program *defines* your pipeline, including all of
@@ -94,7 +89,7 @@ objects you've created and transforms that you've applied. That graph is then
 executed using the appropriate distributed processing back-end, becoming an
 asynchronous "job" (or equivalent) on that back-end.
 
-## 2. Creating a pipeline
+## 2. Creating a pipeline {#creating-a-pipeline}
 
 The `Pipeline` abstraction encapsulates all the data and steps in your data
 processing task. Your Beam driver program typically starts by constructing a
@@ -122,7 +117,7 @@ Pipeline p = Pipeline.create(options);
 %}
 ```
 
-### 2.1. Configuring pipeline options
+### 2.1. Configuring pipeline options {#configuring-pipeline-options}
 
 Use the pipeline options to configure different aspects of your pipeline, such
 as the pipeline runner that will execute your pipeline and any runner-specific
@@ -134,7 +129,7 @@ When you run the pipeline on a runner of your choice, a copy of the
 PipelineOptions will be available to your code. For example, you can read
 PipelineOptions from a DoFn's Context.
 
-#### 2.1.1. Setting PipelineOptions from command-line arguments
+#### 2.1.1. Setting PipelineOptions from command-line arguments {#pipeline-options-cli}
 
 While you can configure your pipeline by creating a `PipelineOptions` object and
 setting the fields directly, the Beam SDKs include a command-line parser that
@@ -167,7 +162,7 @@ a command-line argument.
 > demonstrates how to set pipeline options at runtime by using command-line
 > options.
 
-#### 2.1.2. Creating custom options
+#### 2.1.2. Creating custom options {#creating-custom-options}
 
 You can add your own custom options in addition to the standard
 `PipelineOptions`. To add your own options, define an interface with getter and
@@ -223,7 +218,7 @@ MyOptions options = PipelineOptionsFactory.fromArgs(args)
 
 Now your pipeline can accept `--myCustomOption=value` as a command-line argument.
 
-## 3. PCollections
+## 3. PCollections {#pcollections}
 
 The [PCollection]({{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest }}/index.html?org/apache/beam/sdk/values/PCollection.html)
 `PCollection` abstraction represents a
@@ -236,7 +231,7 @@ After you've created your `Pipeline`, you'll need to begin by creating at least
 one `PCollection` in some form. The `PCollection` you create serves as the input
 for the first operation in your pipeline.
 
-### 3.1. Creating a PCollection
+### 3.1. Creating a PCollection {#creating-a-pcollection}
 
 You create a `PCollection` by either reading data from an external source using
 Beam's [Source API](#pipeline-io), or you can create a `PCollection` of data
@@ -246,7 +241,7 @@ contain adapters to help you read from external sources like large cloud-based
 files, databases, or subscription services. The latter is primarily useful for
 testing and debugging purposes.
 
-#### 3.1.1. Reading from an external source
+#### 3.1.1. Reading from an external source {#reading-external-source}
 
 To read from an external source, you use one of the [Beam-provided I/O
 adapters](#pipeline-io). The adapters vary in their exact usage, but all of them
@@ -283,7 +278,7 @@ public static void main(String[] args) {
 See the [section on I/O](#pipeline-io) to learn more about how to read from the
 various data sources supported by the Beam SDK.
 
-#### 3.1.2. Creating a PCollection from in-memory data
+#### 3.1.2. Creating a PCollection from in-memory data {#creating-pcollection-in-memory}
 
 {:.language-java}
 To create a `PCollection` from an in-memory Java `Collection`, you use the
@@ -326,14 +321,14 @@ public static void main(String[] args) {
 %}
 ```
 
-### 3.2. PCollection characteristics
+### 3.2. PCollection characteristics {#pcollection-characteristics}
 
 A `PCollection` is owned by the specific `Pipeline` object for which it is
 created; multiple pipelines cannot share a `PCollection`. In some respects, a
 `PCollection` functions like a collection class. However, a `PCollection` can
 di
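
The hunks above re-anchor the programming-guide sections on configuring `PipelineOptions` and creating `PCollection`s. As context for those sections, here is a minimal sketch of that flow in the Beam Java SDK; the `MyOptions` interface, its default value, the `gs://some/inputData.txt` path, and the sample lines are illustrative placeholders and are not part of this commit.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class ProgrammingGuideSketch {

  // Custom options (section 2.1.2): the option name and default are illustrative.
  public interface MyOptions extends PipelineOptions {
    @Description("Example custom option, passed as --myCustomOption=value")
    @Default.String("defaultValue")
    String getMyCustomOption();
    void setMyCustomOption(String value);
  }

  public static void main(String[] args) {
    // Section 2.1.1: parse standard and custom options from command-line arguments.
    MyOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);

    Pipeline p = Pipeline.create(options);

    // Section 3.1.1: read from an external source (the path is a placeholder).
    PCollection<String> fromFile = p.apply(TextIO.read().from("gs://some/inputData.txt"));

    // Section 3.1.2: create a PCollection from in-memory data (handy for tests).
    final List<String> lines =
        Arrays.asList("To be, or not to be:", "that is the question:");
    PCollection<String> fromMemory =
        p.apply(Create.of(lines)).setCoder(StringUtf8Coder.of());

    p.run().waitUntilFinish();
  }
}
```

Invoked with, say, `--myCustomOption=someValue --runner=DirectRunner`, the parsed value is available from `options.getMyCustomOption()` wherever the options object is accessible.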
