Regenerate website

Project: http://git-wip-us.apache.org/repos/asf/beam-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam-site/commit/77d285ff
Tree: http://git-wip-us.apache.org/repos/asf/beam-site/tree/77d285ff
Diff: http://git-wip-us.apache.org/repos/asf/beam-site/diff/77d285ff

Branch: refs/heads/asf-site
Commit: 77d285ff8445e6a9902741f21c7a63bcd07ff47e
Parents: dca566f
Author: Davor Bonaci <da...@google.com>
Authored: Wed Feb 22 13:33:58 2017 -0800
Committer: Davor Bonaci <da...@google.com>
Committed: Wed Feb 22 13:33:58 2017 -0800

----------------------------------------------------------------------
 .../sdks/python-custom-io/index.html            | 48 ++++++++++----------
 content/documentation/sdks/python/index.html    |  4 +-
 2 files changed, 26 insertions(+), 26 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/beam-site/blob/77d285ff/content/documentation/sdks/python-custom-io/index.html
----------------------------------------------------------------------
diff --git a/content/documentation/sdks/python-custom-io/index.html 
b/content/documentation/sdks/python-custom-io/index.html
index c43f606..7592a4a 100644
--- a/content/documentation/sdks/python-custom-io/index.html
+++ b/content/documentation/sdks/python-custom-io/index.html
@@ -6,7 +6,7 @@
   <meta http-equiv="X-UA-Compatible" content="IE=edge">
   <meta name="viewport" content="width=device-width, initial-scale=1">
 
-  <title>Beam Custom Sources and Sinks for Python</title>
+  <title>Apache Beam: Creating New Sources and Sinks with the Python 
SDK</title>
   <meta name="description" content="Apache Beam is an open source, unified 
model and set of language-specific SDKs for defining and executing data 
processing workflows, and also data ingestion and integration flows, supporting 
Enterprise Integration Patterns (EIPs) and Domain Specific Languages (DSLs). 
Dataflow pipelines simplify the mechanics of large-scale batch and streaming 
data processing and can run on a number of runtimes like Apache Flink, Apache 
Spark, and Google Cloud Dataflow (a cloud service). Beam also brings DSL in 
different languages, allowing users to easily implement their data integration 
processes.
 ">
 
@@ -146,24 +146,24 @@
     <div class="container" role="main">
 
       <div class="row">
-        <h1 id="beam-custom-sources-and-sinks-for-python">Beam Custom Sources 
and Sinks for Python</h1>
+        <h1 id="creating-new-sources-and-sinks-with-the-python-sdk">Creating 
New Sources and Sinks with the Python SDK</h1>
 
-<p>The Beam SDK for Python provides an extensible API that you can use to 
create custom data sources and sinks. This tutorial shows how to create custom 
sources and sinks using <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py">Beam’s
 Source and Sink API</a>.</p>
+<p>The Apache Beam SDK for Python provides an extensible API that you can use 
to create new data sources and sinks. This tutorial shows how to create new 
sources and sinks using <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py">Beam’s
 Source and Sink API</a>.</p>
 
 <ul>
-  <li>Create a custom source by extending the <code 
class="highlighter-rouge">BoundedSource</code> and <code 
class="highlighter-rouge">RangeTracker</code> interfaces.</li>
-  <li>Create a custom sink by implementing the <code 
class="highlighter-rouge">Sink</code> and <code 
class="highlighter-rouge">Writer</code> classes.</li>
+  <li>Create a new source by extending the <code 
class="highlighter-rouge">BoundedSource</code> and <code 
class="highlighter-rouge">RangeTracker</code> interfaces.</li>
+  <li>Create a new sink by implementing the <code 
class="highlighter-rouge">Sink</code> and <code 
class="highlighter-rouge">Writer</code> classes.</li>
 </ul>
 
-<h2 id="why-create-a-custom-source-or-sink">Why Create a Custom Source or 
Sink</h2>
+<h2 id="why-create-a-new-source-or-sink">Why Create a New Source or Sink</h2>
 
-<p>You’ll need to create a custom source or sink if you want your pipeline 
to read data from (or write data to) a storage system for which the Beam SDK 
for Python does not provide <a 
href="/documentation/programming-guide/#io">native support</a>.</p>
+<p>You’ll need to create a new source or sink if you want your pipeline to 
read data from (or write data to) a storage system for which the Beam SDK for 
Python does not provide <a href="/documentation/programming-guide/#io">native 
support</a>.</p>
 
-<p>In simple cases, you may not need to create a custom source or sink. For 
example, if you need to read data from an SQL database using an arbitrary 
query, none of the advanced Source API features would benefit you. Likewise, if 
you’d like to write data to a third-party API via a protocol that lacks 
deduplication support, the Sink API wouldn’t benefit you. In such cases it 
makes more sense to use a <code class="highlighter-rouge">ParDo</code>.</p>
+<p>In simple cases, you may not need to create a new source or sink. For 
example, if you need to read data from an SQL database using an arbitrary 
query, none of the advanced Source API features would benefit you. Likewise, if 
you’d like to write data to a third-party API via a protocol that lacks 
deduplication support, the Sink API wouldn’t benefit you. In such cases it 
makes more sense to use a <code class="highlighter-rouge">ParDo</code>.</p>
 
-<p>However, if you’d like to use advanced features such as dynamic splitting 
and size estimation, you should use Beam’s APIs and create a custom source or 
sink.</p>
+<p>However, if you’d like to use advanced features such as dynamic splitting 
and size estimation, you should use Beam’s APIs and create a new source or 
sink.</p>
 
-<h2 
id="a-namebasic-code-reqsabasic-code-requirements-for-custom-sources-and-sinks"><a
 name="basic-code-reqs"></a>Basic Code Requirements for Custom Sources and 
Sinks</h2>
+<h2 
id="a-namebasic-code-reqsabasic-code-requirements-for-new-sources-and-sinks"><a 
name="basic-code-reqs"></a>Basic Code Requirements for New Sources and 
Sinks</h2>
 
 <p>Services use the classes you provide to read and/or write data using 
multiple worker instances in parallel. As such, the code you provide for <code 
class="highlighter-rouge">Source</code> and <code 
class="highlighter-rouge">Sink</code> subclasses must meet some basic 
requirements:</p>
 
@@ -185,9 +185,9 @@
 
 <p>You can use test harnesses and utility methods available in the <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/source_test_utils.py">source_test_utils
 module</a> to develop tests for your source.</p>
 
-<h2 id="a-namecreating-sourcesacreating-a-custom-source"><a 
name="creating-sources"></a>Creating a Custom Source</h2>
+<h2 id="a-namecreating-sourcesacreating-a-new-source"><a 
name="creating-sources"></a>Creating a New Source</h2>
 
-<p>You should create a custom source if you’d like to use the advanced 
features that the Source API provides:</p>
+<p>You should create a new source if you’d like to use the advanced features 
that the Source API provides:</p>
 
 <ul>
   <li>Dynamic splitting</li>
@@ -198,9 +198,9 @@
 
 <p>For example, if you’d like to read from a new file format that contains 
many records per file, or if you’d like to read from a key-value store that 
supports read operations in sorted key order.</p>
 
-<p>To create a custom data source for your pipeline, you’ll need to provide 
the format-specific logic that tells the service how to read data from your 
input source, and how to split your data source into multiple parts so that 
multiple worker instances can read your data in parallel.</p>
+<p>To create a new data source for your pipeline, you’ll need to provide the 
format-specific logic that tells the service how to read data from your input 
source, and how to split your data source into multiple parts so that multiple 
worker instances can read your data in parallel.</p>
 
-<p>You supply the logic for your custom source by creating the following 
classes:</p>
+<p>You supply the logic for your new source by creating the following 
classes:</p>
 
 <ul>
   <li>A subclass of <code class="highlighter-rouge">BoundedSource</code>, 
which you can find in the <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py">iobase.py</a>
 module. <code class="highlighter-rouge">BoundedSource</code> is a source that 
reads a finite amount of input records. The class describes the data you want 
to read, including the data’s location and parameters (such as how much data 
to read).</li>
@@ -330,7 +330,7 @@
 
 <p>See <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/avroio.py">AvroSource</a>
 for an example implementation of <code 
class="highlighter-rouge">FileBasedSource</code>.</p>
 
-<h2 id="a-namereading-sourcesareading-from-a-custom-source"><a 
name="reading-sources"></a>Reading from a Custom Source</h2>
+<h2 id="a-namereading-sourcesareading-from-a-new-source"><a 
name="reading-sources"></a>Reading from a New Source</h2>
 
 <p>The following example, <code 
class="highlighter-rouge">CountingSource</code>, demonstrates an implementation 
of <code class="highlighter-rouge">BoundedSource</code> and uses the 
SDK-provided <code class="highlighter-rouge">RangeTracker</code> called <code 
class="highlighter-rouge">OffsetRangeTracker</code>.</p>
 
@@ -374,7 +374,7 @@
 </code></pre>
 </div>
 
-<p>To read data from a custom source in your pipeline, use the <code 
class="highlighter-rouge">Read</code> transform:</p>
+<p>To read data from the source in your pipeline, use the <code 
class="highlighter-rouge">Read</code> transform:</p>
 
 <div class="highlighter-rouge"><pre class="highlight"><code>p = 
beam.Pipeline(options=PipelineOptions())
 numbers = p | 'ProduceNumbers' &gt;&gt; beam.io.Read(CountingSource(count))
@@ -383,9 +383,9 @@ numbers = p | 'ProduceNumbers' &gt;&gt; 
beam.io.Read(CountingSource(count))
 
 <p><strong>Note:</strong> When you create a source that end-users are going to 
use, it’s recommended that you do not expose the code for the source itself 
as demonstrated in the example above, but rather use a wrapping <code 
class="highlighter-rouge">PTransform</code> instead. See <a 
href="#ptransform-wrappers">PTransform wrappers</a> to see how and why to avoid 
exposing your sources.</p>
 
-<h2 id="a-namecreating-sinksacreating-a-custom-sink"><a 
name="creating-sinks"></a>Creating a Custom Sink</h2>
+<h2 id="a-namecreating-sinksacreating-a-new-sink"><a 
name="creating-sinks"></a>Creating a New Sink</h2>
 
-<p>You should create a custom sink if you’d like to use the advanced 
features that the Sink API provides, such as global initialization and 
finalization that allow the write operation to appear “atomic” (i.e. either 
all data is written or none is).</p>
+<p>You should create a new sink if you’d like to use the advanced features 
that the Sink API provides, such as global initialization and finalization that 
allow the write operation to appear “atomic” (i.e. either all data is 
written or none is).</p>
 
 <p>A sink represents a resource that can be written to using the <code 
class="highlighter-rouge">Write</code> transform. A parallel write to a sink 
consists of three phases:</p>
 
@@ -397,7 +397,7 @@ numbers = p | 'ProduceNumbers' &gt;&gt; 
beam.io.Read(CountingSource(count))
 
 <p>For example, if you’d like to write to a new table in a database, you 
should use the Sink API. In this case, the initializer will create a temporary 
table, the writer will write rows to it, and the finalizer will rename the 
table to a final location.</p>
 
-<p>To create a custom data sink for your pipeline, you’ll need to provide 
the format-specific logic that tells the sink how to write bounded data from 
your pipeline’s <code class="highlighter-rouge">PCollection</code>s to an 
output sink. The sink writes bundles of data in parallel using multiple 
workers.</p>
+<p>To create a new data sink for your pipeline, you’ll need to provide the 
format-specific logic that tells the sink how to write bounded data from your 
pipeline’s <code class="highlighter-rouge">PCollection</code>s to an output 
sink. The sink writes bundles of data in parallel using multiple workers.</p>
 
 <p>You supply the writing logic by creating the following classes:</p>
 
@@ -465,7 +465,7 @@ numbers = p | 'ProduceNumbers' &gt;&gt; 
beam.io.Read(CountingSource(count))
   <li>Setting the output MIME type</li>
 </ul>
 
-<h2 id="a-namewriting-sinksawriting-to-a-custom-sink"><a 
name="writing-sinks"></a>Writing to a Custom Sink</h2>
+<h2 id="a-namewriting-sinksawriting-to-a-new-sink"><a 
name="writing-sinks"></a>Writing to a New Sink</h2>
 
 <p>Consider a simple key-value storage that writes a given set of key-value 
pairs to a set of tables. The following is the key-value storage’s API:</p>
 
@@ -532,15 +532,15 @@ kvs | 'WriteToSimpleKV' &gt;&gt; beam.io.Write(
 
 <h2 id="a-nameptransform-wrappersaptransform-wrappers"><a 
name="ptransform-wrappers"></a>PTransform Wrappers</h2>
 
-<p>If you create a custom source or sink for your own use, such as for 
learning purposes, you should create them as explained in the sections above 
and use them as demonstrated in the examples.</p>
+<p>If you create a new source or sink for your own use, such as for learning 
purposes, you should create them as explained in the sections above and use 
them as demonstrated in the examples.</p>
 
-<p>However, when you create a source or sink that end-users are going to use, 
instead of exposing the source or sink itself, you should create a wrapper 
<code class="highlighter-rouge">PTransform</code>. Ideally, a custom source or 
sink should be exposed to users simply as “something that can be applied in a 
pipeline”, which is actually a <code 
class="highlighter-rouge">PTransform</code>. That way, its implementation can 
be hidden and arbitrarily complex or simple.</p>
+<p>However, when you create a source or sink that end-users are going to use, 
instead of exposing the source or sink itself, you should create a wrapper 
<code class="highlighter-rouge">PTransform</code>. Ideally, a source or sink 
should be exposed to users simply as “something that can be applied in a 
pipeline”, which is actually a <code 
class="highlighter-rouge">PTransform</code>. That way, its implementation can 
be hidden and arbitrarily complex or simple.</p>
 
 <p>The greatest benefit of not exposing the implementation details is that 
later on you will be able to add additional functionality without breaking the 
existing implementation for users.  For example, if your users’ pipelines 
read from your source using <code 
class="highlighter-rouge">beam.io.Read(...)</code> and you want to insert a 
reshard into the pipeline, all of your users would need to add the reshard 
themselves (using the <code class="highlighter-rouge">GroupByKey</code> 
transform). To solve this, it’s recommended that you expose your source as a 
composite <code class="highlighter-rouge">PTransform</code> that performs both 
the read operation and the reshard.</p>
 
-<p>To avoid exposing your custom sources and sinks to end-users, it’s 
recommended that you use the <code class="highlighter-rouge">_</code> prefix 
when creating your custom source and sink classes. Then, create a wrapper <code 
class="highlighter-rouge">PTransform</code>.</p>
+<p>To avoid exposing your sources and sinks to end-users, it’s recommended 
that you use the <code class="highlighter-rouge">_</code> prefix when creating 
your new source and sink classes. Then, create a wrapper <code 
class="highlighter-rouge">PTransform</code>.</p>
 
-<p>The following examples change the custom source and sink from the above 
sections so that they are not exposed to end-users. For the source, rename 
<code class="highlighter-rouge">CountingSource</code> to <code 
class="highlighter-rouge">_CountingSource</code>. Then, create the wrapper 
<code class="highlighter-rouge">PTransform</code>, called <code 
class="highlighter-rouge">ReadFromCountingSource</code>:</p>
+<p>The following examples change the source and sink from the above sections 
so that they are not exposed to end-users. For the source, rename <code 
class="highlighter-rouge">CountingSource</code> to <code 
class="highlighter-rouge">_CountingSource</code>. Then, create the wrapper 
<code class="highlighter-rouge">PTransform</code>, called <code 
class="highlighter-rouge">ReadFromCountingSource</code>:</p>
 
 <div class="highlighter-rouge"><pre class="highlight"><code>class 
ReadFromCountingSource(PTransform):
 

http://git-wip-us.apache.org/repos/asf/beam-site/blob/77d285ff/content/documentation/sdks/python/index.html
----------------------------------------------------------------------
diff --git a/content/documentation/sdks/python/index.html 
b/content/documentation/sdks/python/index.html
index 24573cf..aa6eb71 100644
--- a/content/documentation/sdks/python/index.html
+++ b/content/documentation/sdks/python/index.html
@@ -164,9 +164,9 @@
 
 <p>When you run your pipeline locally, the packages that your pipeline depends 
on are available because they are installed on your local machine. However, 
when you want to run your pipeline remotely, you must make sure these 
dependencies are available on the remote machines. <a 
href="/documentation/sdks/python-pipeline-dependencies">Managing Python 
Pipeline Dependencies</a> shows you how to make your dependencies available to 
the remote workers.</p>
 
-<h2 id="custom-sources-and-sinks">Custom Sources and Sinks</h2>
+<h2 id="creating-new-sources-and-sinks">Creating New Sources and Sinks</h2>
 
-<p>The Beam SDK for Python provides an extensible API that you can use to 
create custom data sources and sinks. The <a 
href="/documentation/sdks/python-custom-io">Custom Sources and Sinks for Python 
tutorial</a> shows how to create custom sources and sinks using <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py">Beam’s
 Source and Sink API</a>.</p>
+<p>The Beam SDK for Python provides an extensible API that you can use to 
create new data sources and sinks. <a 
href="/documentation/sdks/python-custom-io">Creating New Sources and Sinks with 
the Python SDK</a> shows how to create new sources and sinks using <a 
href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py">Beam’s
 Source and Sink API</a>.</p>
 
 
       </div>
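[Editor's sketch] Both diffed pages reference CountingSource, a BoundedSource built on the SDK-provided OffsetRangeTracker. The claim-before-emit contract that makes dynamic splitting possible can be modeled in dependency-free Python; SimpleOffsetRangeTracker and read_counting_range below are hypothetical names for illustration, not Beam APIs (the real classes live in apache_beam.io.iobase and apache_beam.io.range_trackers).

```python
class SimpleOffsetRangeTracker:
    """Minimal model of the RangeTracker contract: a reader must claim
    each position before emitting the record there, and claims at or
    past `stop` are rejected (e.g. after a dynamic split shrinks the
    range)."""

    def __init__(self, start, stop):
        self.start = start
        self.stop = stop
        self.last_claimed = None

    def try_claim(self, position):
        if position >= self.stop:
            return False
        self.last_claimed = position
        return True


def read_counting_range(start, stop):
    """Model of a bounded read loop: emit integers while claims succeed,
    stopping cleanly when the tracker refuses a position."""
    tracker = SimpleOffsetRangeTracker(start, stop)
    position = start
    while tracker.try_claim(position):
        yield position
        position += 1
```

In the real API the runner, not the reader, decides when the range shrinks; the reader's only obligation is to honor a failed try_claim by stopping.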
