This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new b4f9f03  Publishing website 2020/03/05 21:39:09 at commit 35beffc
b4f9f03 is described below

commit b4f9f03ef53d2da38d882eab5ca427e583cba658
Author: jenkins <bui...@apache.org>
AuthorDate: Thu Mar 5 21:39:09 2020 +0000

    Publishing website 2020/03/05 21:39:09 at commit 35beffc
---
 .../capability/2016/03/17/capability-matrix.html   |  4 +-
 .../beam/release/2016/06/15/first-release.html     |  2 +-
 .../2016/10/11/strata-hadoop-world-and-beam.html   |  2 +-
 .../2016/06/13/flink-batch-runner-milestone.html   |  2 +-
 .../blog/2017/01/09/added-apex-runner.html         |  2 +-
 .../blog/2017/02/01/graduation-media-recap.html    | 10 ++--
 .../blog/2017/08/16/splittable-do-fn.html          | 16 +++----
 .../08/20/review-input-streaming-connectors.html   | 14 +++---
 .../blog/2019/05/11/beam-summit-europe-2019.html   |  2 +-
 .../2019/06/04/adding-data-sources-to-sql.html     |  2 +-
 website/generated-content/blog/index.html          | 10 ++--
 .../community/contact-us/index.html                |  4 +-
 .../contribute/committer-guide/index.html          |  4 +-
 .../contribute/ptransform-style-guide/index.html   |  8 ++--
 .../contribute/release-guide/index.html            |  4 +-
 .../dsls/sql/calcite/data-types/index.html         |  2 +-
 .../dsls/sql/calcite/lexical/index.html            |  2 +-
 .../dsls/sql/calcite/overview/index.html           | 56 +++++++++++-----------
 .../dsls/sql/calcite/query-syntax/index.html       |  4 +-
 .../documentation/dsls/sql/overview/index.html     |  2 +-
 .../dsls/sql/zetasql/data-types/index.html         |  6 +--
 .../dsls/sql/zetasql/lexical/index.html            |  4 +-
 website/generated-content/documentation/index.html | 12 ++---
 .../io/built-in/google-bigquery/index.html         |  2 +-
 .../documentation/io/testing/index.html            |  2 +-
 .../pipelines/test-your-pipeline/index.html        |  2 +-
 .../resources/learning-resources/index.html        |  2 +-
 .../documentation/runners/apex/index.html          | 10 ++--
 .../runners/capability-matrix/index.html           |  4 +-
 .../documentation/runners/gearpump/index.html      |  4 +-
 .../documentation/runners/mapreduce/index.html     |  2 +-
 .../documentation/runners/nemo/index.html          |  2 +-
 .../documentation/runners/samza/index.html         |  4 +-
 .../documentation/runners/spark/index.html         | 18 +++----
 .../documentation/sdks/java/euphoria/index.html    |  2 +-
 .../java/aggregation/hllcount/index.html           |  2 +-
 website/generated-content/feed.xml                 |  2 +-
 .../get-started/beam-overview/index.html           |  4 +-
 .../get-started/downloads/index.html               | 50 +++++++++----------
 website/generated-content/get-started/index.html   |  2 +-
 .../get-started/quickstart-java/index.html         |  4 +-
 .../get-started/quickstart-py/index.html           |  2 +-
 website/generated-content/index.html               | 10 ++--
 .../generated-content/privacy_policy/index.html    |  2 +-
 44 files changed, 153 insertions(+), 153 deletions(-)

diff --git a/website/generated-content/beam/capability/2016/03/17/capability-matrix.html b/website/generated-content/beam/capability/2016/03/17/capability-matrix.html
index 93af227..98fa9ed 100644
--- a/website/generated-content/beam/capability/2016/03/17/capability-matrix.html
+++ b/website/generated-content/beam/capability/2016/03/17/capability-matrix.html
@@ -195,9 +195,9 @@ limitations under the License.
 
 <!--more-->
 
-<p>While we’d love to have a world where all runners support the full suite of 
semantics included in the Beam Model (formerly referred to as the <a 
href="http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf">Dataflow Model</a>), 
practically speaking, there will always be certain features that some runners 
can’t provide. For example, a Hadoop-based runner would be inherently 
batch-based and may be unable to (easily) implement support for unbounded 
collections. However, that doesn’t prevent it  [...]
+<p>While we’d love to have a world where all runners support the full suite of 
semantics included in the Beam Model (formerly referred to as the <a 
href="https://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf">Dataflow Model</a>), 
practically speaking, there will always be certain features that some runners 
can’t provide. For example, a Hadoop-based runner would be inherently 
batch-based and may be unable to (easily) implement support for unbounded 
collections. However, that doesn’t prevent it [...]
 
-<p>To help clarify things, we’ve been working on enumerating the key features 
of the Beam model in a <a 
href="/documentation/runners/capability-matrix/">capability matrix</a> for all 
existing runners, categorized around the four key questions addressed by the 
model: <span class="wwwh-what-dark">What</span> / <span 
class="wwwh-where-dark">Where</span> / <span class="wwwh-when-dark">When</span> 
/ <span class="wwwh-how-dark">How</span> (if you’re not familiar with those 
questions, you might [...]
+<p>To help clarify things, we’ve been working on enumerating the key features 
of the Beam model in a <a 
href="/documentation/runners/capability-matrix/">capability matrix</a> for all 
existing runners, categorized around the four key questions addressed by the 
model: <span class="wwwh-what-dark">What</span> / <span 
class="wwwh-where-dark">Where</span> / <span class="wwwh-when-dark">When</span> 
/ <span class="wwwh-how-dark">How</span> (if you’re not familiar with those 
questions, you might [...]
 
 <p>Included below is a summary snapshot of our current understanding of the 
capabilities of the existing runners (see the <a 
href="/documentation/runners/capability-matrix/">live version</a> for full 
details, descriptions, and Jira links); since integration is still under way, 
the system as whole isn’t yet in a completely stable, usable state. But that 
should be changing in the near future, and we’ll be updating loud and clear on 
this blog when the first supported Beam 1.0 release happens.</p>
 
diff --git a/website/generated-content/beam/release/2016/06/15/first-release.html b/website/generated-content/beam/release/2016/06/15/first-release.html
index e4c75fc..164a245 100644
--- a/website/generated-content/beam/release/2016/06/15/first-release.html
+++ b/website/generated-content/beam/release/2016/06/15/first-release.html
@@ -202,7 +202,7 @@ this year.</p>
 making them readily available for our users. The initial release includes the
 SDK for Java, along with three runners: Apache Flink, Apache Spark and Google
 Cloud Dataflow, a fully-managed cloud service. The release is available both
-in the <a 
href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.beam%22">Maven
 Central Repository</a>,
+in the <a 
href="https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.beam%22">Maven
 Central Repository</a>,
 as well as a download from the <a href="/get-started/downloads/">project’s 
website</a>.</p>
 
 <p>The goal of this release was process-oriented. In particular, the Beam
diff --git a/website/generated-content/beam/update/2016/10/11/strata-hadoop-world-and-beam.html b/website/generated-content/beam/update/2016/10/11/strata-hadoop-world-and-beam.html
index 7516e88..f954ee2 100644
--- a/website/generated-content/beam/update/2016/10/11/strata-hadoop-world-and-beam.html
+++ b/website/generated-content/beam/update/2016/10/11/strata-hadoop-world-and-beam.html
@@ -191,7 +191,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-<p>Tyler Akidau and I gave a <a 
href="http://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/52129">three-hour
 tutorial</a> on Apache Beam at Strata+Hadoop World 2016. We had a plethora of 
help from our TAs: Kenn Knowles, Reuven Lax, Felipe Hoffa, Slava Chernyak, and 
Jamie Grier. There were a total of 66 people that attended the 
session.<!--more--></p>
+<p>Tyler Akidau and I gave a <a 
href="https://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/52129">three-hour
 tutorial</a> on Apache Beam at Strata+Hadoop World 2016. We had a plethora of 
help from our TAs: Kenn Knowles, Reuven Lax, Felipe Hoffa, Slava Chernyak, and 
Jamie Grier. There were a total of 66 people that attended the 
session.<!--more--></p>
 
 <p><img src="/images/blog/IMG_20160927_170956.jpg" alt="Exercise time" /></p>
 
diff --git a/website/generated-content/blog/2016/06/13/flink-batch-runner-milestone.html b/website/generated-content/blog/2016/06/13/flink-batch-runner-milestone.html
index 793f4df..1260d81 100644
--- a/website/generated-content/blog/2016/06/13/flink-batch-runner-milestone.html
+++ b/website/generated-content/blog/2016/06/13/flink-batch-runner-milestone.html
@@ -190,7 +190,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 -->
-<p>We recently achieved a major milestone by adding support for windowing to 
the <a href="http://flink.apache.org">Apache Flink</a> Batch runner. In this 
post we would like to explain what this means for users of Apache Beam and 
highlight some of the implementation details.</p>
+<p>We recently achieved a major milestone by adding support for windowing to 
the <a href="https://flink.apache.org">Apache Flink</a> Batch runner. In this 
post we would like to explain what this means for users of Apache Beam and 
highlight some of the implementation details.</p>
 
 <!--more-->
 
diff --git a/website/generated-content/blog/2017/01/09/added-apex-runner.html b/website/generated-content/blog/2017/01/09/added-apex-runner.html
index d50070c..d7dbe82 100644
--- a/website/generated-content/blog/2017/01/09/added-apex-runner.html
+++ b/website/generated-content/blog/2017/01/09/added-apex-runner.html
@@ -191,7 +191,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-<p>The latest release 0.4.0 of <a href="/">Apache Beam</a> adds a new runner 
for <a href="http://apex.apache.org/">Apache Apex</a>. We are excited to reach 
this initial milestone and are looking forward to continued collaboration 
between the Beam and Apex communities to advance the runner.</p>
+<p>The latest release 0.4.0 of <a href="/">Apache Beam</a> adds a new runner 
for <a href="https://apex.apache.org/">Apache Apex</a>. We are excited to reach 
this initial milestone and are looking forward to continued collaboration 
between the Beam and Apex communities to advance the runner.</p>
 
 <!--more-->
 
diff --git a/website/generated-content/blog/2017/02/01/graduation-media-recap.html b/website/generated-content/blog/2017/02/01/graduation-media-recap.html
index 7b7aad0..5ce0621 100644
--- a/website/generated-content/blog/2017/02/01/graduation-media-recap.html
+++ b/website/generated-content/blog/2017/02/01/graduation-media-recap.html
@@ -215,15 +215,15 @@ others. You can read more in the following blog posts:</p>
 and followed by coverage in many independent outlets. Some of those in English
 include:</p>
 <ul>
-  <li>ZDNet: “<a 
href="http://www.zdnet.com/article/apache-beam-and-spark-new-coopetition-for-squashing-the-lambda-architecture/">Apache
 Beam and Spark: New coopetition for squashing the Lambda Architecture?</a>” by 
Tony Baer.</li>
+  <li>ZDNet: “<a 
href="https://www.zdnet.com/article/apache-beam-and-spark-new-coopetition-for-squashing-the-lambda-architecture/">Apache
 Beam and Spark: New coopetition for squashing the Lambda Architecture?</a>” by 
Tony Baer.</li>
   <li>Datanami: “<a 
href="https://www.datanami.com/2017/01/10/google-lauds-outside-influence-apache-beam/">Google
 Lauds Outside Influence on Apache Beam</a>” by Alex Woodie.</li>
-  <li>InfoWorld / JavaWorld: “<a 
href="http://www.infoworld.com/article/3156598/big-data/apache-beam-unifies-batch-and-streaming-for-big-data.html">Apache
 Beam unifies batch and streaming for big data</a>” by Serdar Yegulalp, and 
republished in <a 
href="http://www.javaworld.com/article/3156598/big-data/apache-beam-unifies-batch-and-streaming-for-big-data.html">JavaWorld</a>.</li>
+  <li>InfoWorld / JavaWorld: “<a 
href="https://www.infoworld.com/article/3156598/big-data/apache-beam-unifies-batch-and-streaming-for-big-data.html">Apache
 Beam unifies batch and streaming for big data</a>” by Serdar Yegulalp, and 
republished in <a 
href="https://www.javaworld.com/article/3156598/big-data/apache-beam-unifies-batch-and-streaming-for-big-data.html">JavaWorld</a>.</li>
   <li>JAXenter: “<a 
href="https://jaxenter.com/apache-beam-interview-131314.html">In a way, Apache 
Beam is the glue that connects many big data systems together</a>” by Kypriani 
Sinaris.</li>
   <li>OStatic: “Apache Beam Unifies Batch and Streaming Data Processing” by 
Sam Dean. <!-- 
http://ostatic.com/blog/apache-beam-unifies-batch-and-streaming-data-processing 
--></li>
   <li>Enterprise Apps Today: “<a 
href="http://www.enterpriseappstoday.com/business-intelligence/data-analytics/apache-beam-graduates-to-help-define-streaming-data-processing.html">Apache
 Beam Graduates to Help Define Streaming Data Processing</a>” by Sean Michael 
Kerner.</li>
-  <li>The Register: “<a 
href="http://www.theregister.co.uk/2017/01/10/google_must_be_ibeamiing_as_apache_announces_its_new_top_level_projects/">Google
 must be Beaming as Apache announces its new top-level projects</a>” by 
Alexander J. Martin.</li>
-  <li>SiliconANGLE: “<a 
href="http://siliconangle.com/blog/2017/01/11/apache-software-foundation-announces-2-top-level-projects/">Apache
 Software Foundation announces two more top-level open source projects</a>” by 
Mike Wheatley.</li>
-  <li>SD Times: “<a 
href="http://sdtimes.com/apache-beam-goes-top-level/">Apache Beam goes top 
level</a>” by Alex Handy.</li>
+  <li>The Register: “<a 
href="https://www.theregister.co.uk/2017/01/10/google_must_be_ibeamiing_as_apache_announces_its_new_top_level_projects/">Google
 must be Beaming as Apache announces its new top-level projects</a>” by 
Alexander J. Martin.</li>
+  <li>SiliconANGLE: “<a 
href="https://siliconangle.com/blog/2017/01/11/apache-software-foundation-announces-2-top-level-projects/">Apache
 Software Foundation announces two more top-level open source projects</a>” by 
Mike Wheatley.</li>
+  <li>SD Times: “<a 
href="https://sdtimes.com/apache-beam-goes-top-level/">Apache Beam goes top 
level</a>” by Alex Handy.</li>
 </ul>
 
 <p>Graduation and media coverage helped push Beam website traffic to record 
levels.
diff --git a/website/generated-content/blog/2017/08/16/splittable-do-fn.html b/website/generated-content/blog/2017/08/16/splittable-do-fn.html
index 3323596..4b24ceb 100644
--- a/website/generated-content/blog/2017/08/16/splittable-do-fn.html
+++ b/website/generated-content/blog/2017/08/16/splittable-do-fn.html
@@ -334,7 +334,7 @@ the Source API, and ended up, surprisingly, addressing the limitations of
 
 <h2 id="enter-splittable-dofn">Enter Splittable DoFn</h2>
 
-<p><a href="http://s.apache.org/splittable-do-fn">Splittable DoFn</a> (SDF) is 
a
+<p><a href="https://s.apache.org/splittable-do-fn">Splittable DoFn</a> (SDF) 
is a
 generalization of <code class="highlighter-rouge">DoFn</code> that gives it 
the core capabilities of <code class="highlighter-rouge">Source</code> while
 retaining <code class="highlighter-rouge">DoFn</code>’s syntax, flexibility, 
modularity, and ease of coding.  As a
 result, it becomes possible to develop more powerful IO connectors than before,
@@ -472,7 +472,7 @@ an element/restriction pair.</p>
 <p>An overwhelming majority of <code class="highlighter-rouge">DoFn</code>s 
found in user pipelines do not need to be
 made splittable: SDF is an advanced, powerful API, primarily targeting authors
 of new IO connectors <em>(though it has interesting non-IO applications as 
well:
-see <a 
href="http://s.apache.org/splittable-do-fn#heading=h.5cep9s8k4fxv">Non-IO 
examples</a>)</em>.</p>
+see <a 
href="https://s.apache.org/splittable-do-fn#heading=h.5cep9s8k4fxv">Non-IO 
examples</a>)</em>.</p>
 
 <h3 id="execution-of-a-restriction-and-data-consistency">Execution of a 
restriction and data consistency</h3>
 
@@ -499,7 +499,7 @@ data block, otherwise, it terminates.</p>
 
 <p><img class="center-block" src="/images/blog/splittable-do-fn/blocks.png" 
alt="Processing a restriction by claiming blocks inside it" width="400" /></p>
 
-<p>For more details, see <a 
href="http://s.apache.org/splittable-do-fn#heading=h.vjs7pzbb7kw">Restrictions, 
blocks and
+<p>For more details, see <a 
href="https://s.apache.org/splittable-do-fn#heading=h.vjs7pzbb7kw">Restrictions,
 blocks and
 positions</a> in the
 design proposal document.</p>
 
@@ -508,7 +508,7 @@ design proposal document.</p>
 <p>Let us look at some examples of SDF code. The examples use the Beam Java 
SDK,
 which <a 
href="https://github.com/apache/beam/blob/f7e8f886c91ea9d0b51e00331eeb4484e2f6e000/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/DoFn.java#L527">represents
 splittable
 <code class="highlighter-rouge">DoFn</code>s</a>
-as part of the flexible <a 
href="http://s.apache.org/a-new-dofn">annotation-based
+as part of the flexible <a 
href="https://s.apache.org/a-new-dofn">annotation-based
 <code class="highlighter-rouge">DoFn</code></a> machinery, and the <a 
href="https://s.apache.org/splittable-do-fn-python">proposed SDF syntax
 for Python</a>.</p>
 
@@ -673,8 +673,8 @@ These utility transforms are also independently useful for 
“power user” use
 cases.</p>
 
 <p>To enable more flexible use cases for IOs currently based on the Source 
API, we
-will change them to use SDF. This transition is <a 
href="http://s.apache.org/textio-sdf">pioneered by
-TextIO</a> and involves temporarily <a 
href="http://s.apache.org/sdf-via-source">executing SDF
+will change them to use SDF. This transition is <a 
href="https://s.apache.org/textio-sdf">pioneered by
+TextIO</a> and involves temporarily <a 
href="https://s.apache.org/sdf-via-source">executing SDF
 via the Source API</a> to support runners
 lacking the ability to run SDF directly.</p>
 
@@ -687,11 +687,11 @@ other parts of the Beam programming model:</p>
 not batch/streaming agnostic (the <code 
class="highlighter-rouge">Source</code> API). This led us to consider use
 cases that cannot be described as purely batch or streaming (for example,
 ingesting a large amount of historical data and carrying on with more data
-arriving in real time) and to develop a <a 
href="http://s.apache.org/beam-fn-api-progress-reporting">unified notion of 
“progress” and
+arriving in real time) and to develop a <a 
href="https://s.apache.org/beam-fn-api-progress-reporting">unified notion of 
“progress” and
 “backlog”</a>.</p>
   </li>
   <li>
-    <p>The <a href="http://s.apache.org/beam-fn-api">Fn API</a> - the 
foundation of Beam’s
+    <p>The <a href="https://s.apache.org/beam-fn-api">Fn API</a> - the 
foundation of Beam’s
 future support for cross-language pipelines - uses SDF as <em>the only</em> 
concept
 representing data ingestion.</p>
   </li>
diff --git a/website/generated-content/blog/2018/08/20/review-input-streaming-connectors.html b/website/generated-content/blog/2018/08/20/review-input-streaming-connectors.html
index cf843af..4fff065 100644
--- a/website/generated-content/blog/2018/08/20/review-input-streaming-connectors.html
+++ b/website/generated-content/blog/2018/08/20/review-input-streaming-connectors.html
@@ -276,7 +276,7 @@ and <a href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
    </td>
    <td><a 
href="https://beam.apache.org/releases/javadoc/2.19.0/org/apache/beam/sdk/io/gcp/pubsub/PubsubIO.html">PubsubIO</a>
    </td>
-   <td><a 
href="https://github.com/apache/bahir/tree/master/streaming-pubsub">spark-streaming-pubsub</a>
 from <a href="http://bahir.apache.org">Apache Bahir</a>
+   <td><a 
href="https://github.com/apache/bahir/tree/master/streaming-pubsub">spark-streaming-pubsub</a>
 from <a href="https://bahir.apache.org">Apache Bahir</a>
    </td>
   </tr>
   <tr>
@@ -295,7 +295,7 @@ and <a href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
 
 <p>Beam has an official <a href="/documentation/sdks/python/">Python SDK</a> 
that currently supports a subset of the streaming features available in the 
Java SDK. Active development is underway to bridge the gap between the 
featuresets in the two SDKs. Currently for Python, the <a 
href="/documentation/runners/direct/">Direct Runner</a> and <a 
href="/documentation/runners/dataflow/">Dataflow Runner</a> are supported, and 
<a href="/documentation/sdks/python-streaming/">several streaming op [...]
 
-<p>Spark also has a Python SDK called <a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.html">PySpark</a>. 
As mentioned earlier, Scala code compiles to a bytecode that is executed by the 
JVM. PySpark uses <a href="https://www.py4j.org/">Py4J</a>, a library that 
enables Python programs to interact with the JVM and therefore access Java 
libraries, interact with Java objects, and register callbacks from Java. This 
allows PySpark to access native Spark objects like RDDs. Spark  [...]
+<p>Spark also has a Python SDK called <a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.html">PySpark</a>.
 As mentioned earlier, Scala code compiles to a bytecode that is executed by 
the JVM. PySpark uses <a href="https://www.py4j.org/">Py4J</a>, a library that 
enables Python programs to interact with the JVM and therefore access Java 
libraries, interact with Java objects, and register callbacks from Java. This 
allows PySpark to access native Spark objects like RDDs. Spark [...]
 
 <p>Below are the main streaming input connectors for available for Beam and 
Spark DStreams in Python:</p>
 
@@ -317,7 +317,7 @@ and <a href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
    </td>
    <td><a 
href="https://beam.apache.org/releases/pydoc/2.19.0/apache_beam.io.textio.html">io.textio</a>
    </td>
-   <td><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
+   <td><a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
    </td>
   </tr>
   <tr>
@@ -326,7 +326,7 @@ and <a href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
    <td><a 
href="https://beam.apache.org/releases/pydoc/2.19.0/apache_beam.io.hadoopfilesystem.html">io.hadoopfilesystem</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html#hadoopConfiguration--">hadoopConfiguration</a>
 (Access through <code>sc._jsc</code> with Py4J)
-and <a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
+and <a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
    </td>
   </tr>
   <tr>
@@ -336,7 +336,7 @@ and <a href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.ht
    </td>
    <td><a 
href="https://beam.apache.org/releases/pydoc/2.19.0/apache_beam.io.gcp.gcsio.html">io.gcp.gcsio</a>
    </td>
-   <td rowspan="2"><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
+   <td rowspan="2"><a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream">textFileStream</a>
    </td>
   </tr>
   <tr>
@@ -352,7 +352,7 @@ and <a href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.ht
    </td>
    <td>N/A
    </td>
-   <td><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.kafka.KafkaUtils">KafkaUtils</a>
+   <td><a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.kafka.KafkaUtils">KafkaUtils</a>
    </td>
   </tr>
   <tr>
@@ -360,7 +360,7 @@ and <a href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.ht
    </td>
    <td>N/A
    </td>
-   <td><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#module-pyspark.streaming.kinesis">KinesisUtils</a>
+   <td><a 
href="https://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#module-pyspark.streaming.kinesis">KinesisUtils</a>
    </td>
   </tr>
   <tr>
diff --git a/website/generated-content/blog/2019/05/11/beam-summit-europe-2019.html b/website/generated-content/blog/2019/05/11/beam-summit-europe-2019.html
index 3946c99..5347ab7 100644
--- a/website/generated-content/blog/2019/05/11/beam-summit-europe-2019.html
+++ b/website/generated-content/blog/2019/05/11/beam-summit-europe-2019.html
@@ -234,7 +234,7 @@ limitations under the License.
 
 <p>We strongly encourage you to get involved again this year! You can 
participate in the following ways for the upcoming summit in Europe:</p>
 
-<p>🎫 If you want to secure your ticket to attend the Beam Summit Europe 2019, 
check our <a href="http://beam-summit-europe.eventbrite.com">event page</a>.</p>
+<p>🎫 If you want to secure your ticket to attend the Beam Summit Europe 2019, 
check our <a href="https://beam-summit-europe.eventbrite.com">event 
page</a>.</p>
 
 <p>💸 If you want to make the Summit even <strong>more</strong> awesome, check 
out our <a 
href="https://drive.google.com/file/d/1R3vvOHihQbpuzF2aaSV8WYg9YHRmJwxS/view">sponsor
 booklet</a>!</p>
 
diff --git a/website/generated-content/blog/2019/06/04/adding-data-sources-to-sql.html b/website/generated-content/blog/2019/06/04/adding-data-sources-to-sql.html
index 7770e51..e1c529c 100644
--- a/website/generated-content/blog/2019/06/04/adding-data-sources-to-sql.html
+++ b/website/generated-content/blog/2019/06/04/adding-data-sources-to-sql.html
@@ -198,7 +198,7 @@ in Java pipelines.</p>
 
 <p>Beam also has a fancy new SQL command line that you can use to query your
 data interactively, be it Batch or Streaming. If you haven’t tried it, check 
out
-<a href="http://bit.ly/ExploreBeamSQL">http://bit.ly/ExploreBeamSQL</a>.</p>
+<a href="https://bit.ly/ExploreBeamSQL">https://bit.ly/ExploreBeamSQL</a>.</p>
 
 <p>A nice feature of the SQL CLI is that you can use <code 
class="highlighter-rouge">CREATE EXTERNAL TABLE</code>
 commands to <em>add</em> data sources to be accessed in the CLI. Currently, 
the CLI
diff --git a/website/generated-content/blog/index.html b/website/generated-content/blog/index.html
index 2de1a69..66ef23b 100644
--- a/website/generated-content/blog/index.html
+++ b/website/generated-content/blog/index.html
@@ -493,7 +493,7 @@ in Java pipelines.</p>
 
 <p>Beam also has a fancy new SQL command line that you can use to query your
 data interactively, be it Batch or Streaming. If you haven’t tried it, check 
out
-<a href="http://bit.ly/ExploreBeamSQL">http://bit.ly/ExploreBeamSQL</a>.</p>
+<a href="https://bit.ly/ExploreBeamSQL">https://bit.ly/ExploreBeamSQL</a>.</p>
 
 <p>A nice feature of the SQL CLI is that you can use <code 
class="highlighter-rouge">CREATE EXTERNAL TABLE</code>
 commands to <em>add</em> data sources to be accessed in the CLI. Currently, 
the CLI
@@ -607,7 +607,7 @@ limitations under the License.
 
 <p>We strongly encourage you to get involved again this year! You can 
participate in the following ways for the upcoming summit in Europe:</p>
 
-<p>🎫 If you want to secure your ticket to attend the Beam Summit Europe 2019, 
check our <a href="http://beam-summit-europe.eventbrite.com">event page</a>.</p>
+<p>🎫 If you want to secure your ticket to attend the Beam Summit Europe 2019, 
check our <a href="https://beam-summit-europe.eventbrite.com">event 
page</a>.</p>
 
 <p>💸 If you want to make the Summit even <strong>more</strong> awesome, check 
out our <a 
href="https://drive.google.com/file/d/1R3vvOHihQbpuzF2aaSV8WYg9YHRmJwxS/view">sponsor
 booklet</a>!</p>
 
@@ -1435,7 +1435,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-<p>The latest release 0.4.0 of <a href="/">Apache Beam</a> adds a new runner 
for <a href="http://apex.apache.org/">Apache Apex</a>. We are excited to reach 
this initial milestone and are looking forward to continued collaboration 
between the Beam and Apex communities to advance the runner.</p>
+<p>The latest release 0.4.0 of <a href="/">Apache Beam</a> adds a new runner 
for <a href="https://apex.apache.org/">Apache Apex</a>. We are excited to reach 
this initial milestone and are looking forward to continued collaboration 
between the Beam and Apex communities to advance the runner.</p>
 
 <!-- Render a "read more" button if the post is longer than the excerpt -->
 
@@ -1500,7 +1500,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-<p>Tyler Akidau and I gave a <a 
href="http://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/52129">three-hour
 tutorial</a> on Apache Beam at Strata+Hadoop World 2016. We had a plethora of 
help from our TAs: Kenn Knowles, Reuven Lax, Felipe Hoffa, Slava Chernyak, and 
Jamie Grier. There were a total of 66 people that attended the session.</p>
+<p>Tyler Akidau and I gave a <a 
href="https://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/52129">three-hour
 tutorial</a> on Apache Beam at Strata+Hadoop World 2016. We had a plethora of 
help from our TAs: Kenn Knowles, Reuven Lax, Felipe Hoffa, Slava Chernyak, and 
Jamie Grier. There were a total of 66 people that attended the session.</p>
 
 <!-- Render a "read more" button if the post is longer than the excerpt -->
 
@@ -1595,7 +1595,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 -->
-<p>We recently achieved a major milestone by adding support for windowing to 
the <a href="http://flink.apache.org">Apache Flink</a> Batch runner. In this 
post we would like to explain what this means for users of Apache Beam and 
highlight some of the implementation details.</p>
+<p>We recently achieved a major milestone by adding support for windowing to 
the <a href="https://flink.apache.org">Apache Flink</a> Batch runner. In this 
post we would like to explain what this means for users of Apache Beam and 
highlight some of the implementation details.</p>
 
 <!-- Render a "read more" button if the post is longer than the excerpt -->
 
diff --git a/website/generated-content/community/contact-us/index.html b/website/generated-content/community/contact-us/index.html
index ac35880..5b5d94c 100644
--- a/website/generated-content/community/contact-us/index.html
+++ b/website/generated-content/community/contact-us/index.html
@@ -261,7 +261,7 @@ whichever one seems best.</p>
       <td>Report bugs / discover known issues</td>
     </tr>
     <tr>
-      <td><a 
href="http://stackoverflow.com/questions/tagged/apache-beam">StackOverflow</a></td>
+      <td><a 
href="https://stackoverflow.com/questions/tagged/apache-beam">StackOverflow</a></td>
       <td>Ask and answer user support questions</td>
     </tr>
     <tr>
@@ -271,7 +271,7 @@ whichever one seems best.</p>
   </tbody>
 </table>
 
-<p>If you have questions about how to use Apache Beam, we recommend you try 
out the <a 
href="https://lists.apache.org/list.html?u...@beam.apache.org";>user@</a> 
mailing list, and <a 
href="http://stackoverflow.com/questions/tagged/apache-beam";>StackOverflow</a>.</p>
+<p>If you have questions about how to use Apache Beam, we recommend you try 
out the <a 
href="https://lists.apache.org/list.html?u...@beam.apache.org";>user@</a> 
mailing list, and <a 
href="https://stackoverflow.com/questions/tagged/apache-beam";>StackOverflow</a>.</p>
 
 <p>If you wish to report a security vulnerability, please contact <a 
href="mailto:secur...@apache.org";>secur...@apache.org</a>. Apache Beam follows 
the typical <a 
href="https://apache.org/security/committers.html#vulnerability-handling";>Apache
 vulnerability handling process</a>.</p>
 <div class="footnotes">
diff --git a/website/generated-content/contribute/committer-guide/index.html 
b/website/generated-content/contribute/committer-guide/index.html
index 85ba76c..64ace92 100644
--- a/website/generated-content/contribute/committer-guide/index.html
+++ b/website/generated-content/contribute/committer-guide/index.html
@@ -311,8 +311,8 @@ spotted, even after the merge happens.</p>
 
 <p>If you are merging a larger contribution, please make sure that the 
contributor
 has an ICLA on file with the Apache Secretary. You can view the list of
-committers <a 
href="http://home.apache.org/phonebook.html?unix=committers";>here</a>, as
-well as <a href="http://home.apache.org/unlistedclas.html";>ICLA-signers who 
aren’t yet
+committers <a 
href="https://home.apache.org/phonebook.html?unix=committers";>here</a>, as
+well as <a href="https://home.apache.org/unlistedclas.html";>ICLA-signers who 
aren’t yet
 committers</a>.</p>
 
 <p>For smaller contributions, however, this is not required. In this case, we 
rely
diff --git 
a/website/generated-content/contribute/ptransform-style-guide/index.html 
b/website/generated-content/contribute/ptransform-style-guide/index.html
index 35808d7..13bcab7 100644
--- a/website/generated-content/contribute/ptransform-style-guide/index.html
+++ b/website/generated-content/contribute/ptransform-style-guide/index.html
@@ -400,7 +400,7 @@ One advantage of putting a parameter into transform 
configuration is, it can be
   <li><strong>Do not expose</strong> tuning knobs, such as batch sizes, 
connection pool sizes, unless it’s impossible to automatically supply or 
compute a good-enough value (i.e., unless you can imagine a reasonable person 
reporting a bug about the absence of this knob).</li>
   <li>When developing a connector to a library that has many parameters, 
<strong>do not mirror each parameter</strong> of the underlying library - if 
necessary, reuse the underlying library’s configuration class and let user 
supply a whole instance. Example: <code class="highlighter-rouge">JdbcIO</code>.
 <em>Exception 1:</em> if some parameters of the underlying library interact 
with Beam semantics non-trivially, then expose them. E.g. when developing a 
connector to a pub/sub system that has a “delivery guarantee” parameter for 
publishers, expose the parameter but prohibit values incompatible with the Beam 
model (at-most-once and exactly-once).
-<em>Exception 2:</em> if the underlying library’s configuration class is 
cumbersome to use - e.g. does not declare a stable API, exposes problematic 
transitive dependencies, or does not obey <a href="http://semver.org/";>semantic 
versioning</a> - in this case, it is better to wrap it and expose a cleaner and 
more stable API to users of the transform.</li>
+<em>Exception 2:</em> if the underlying library’s configuration class is 
cumbersome to use - e.g. does not declare a stable API, exposes problematic 
transitive dependencies, or does not obey <a 
href="https://semver.org/";>semantic versioning</a> - in this case, it is better 
to wrap it and expose a cleaner and more stable API to users of the 
transform.</li>
 </ul>
 
 <h3 id="error-handling">Error handling</h3>
@@ -518,7 +518,7 @@ E.g. when expanding a filepattern into files, log what the 
filepattern was and h
       <li>Common corner cases when developing sources: complicated arithmetic 
in <code class="highlighter-rouge">BoundedSource.split</code> (e.g. splitting 
key or offset ranges), iteration over empty data sources or composite data 
sources that have some empty components.</li>
     </ul>
   </li>
-  <li>Mock out the interactions with third-party systems, or better, use <a 
href="http://martinfowler.com/articles/mocksArentStubs.html";>“fake”</a> 
implementations when available. Make sure that the mocked-out interactions are 
representative of all interesting cases of the actual behavior of these 
systems.</li>
+  <li>Mock out the interactions with third-party systems, or better, use <a 
href="https://martinfowler.com/articles/mocksArentStubs.html";>“fake”</a> 
implementations when available. Make sure that the mocked-out interactions are 
representative of all interesting cases of the actual behavior of these 
systems.</li>
   <li>To unit test <code class="highlighter-rouge">DoFn</code>s, <code 
class="highlighter-rouge">CombineFn</code>s, and <code 
class="highlighter-rouge">BoundedSource</code>s, consider using <code 
class="highlighter-rouge">DoFnTester</code>, <code 
class="highlighter-rouge">CombineFnTester</code>, and <code 
class="highlighter-rouge">SourceTestUtils</code> respectively which can 
exercise the code in non-trivial ways to flesh out potential bugs.</li>
   <li>For transforms that work over unbounded collections, test their behavior 
in the presence of late or out-of-order data using <code 
class="highlighter-rouge">TestStream</code>.</li>
   <li>Tests must pass 100% of the time, including in hostile, CPU- or 
network-constrained environments (continuous integration servers). Never put 
timing-dependent code (e.g. sleeps) into tests. Experience shows that no 
reasonable amount of sleeping is enough - code can be suspended for more than 
several seconds.</li>
@@ -551,10 +551,10 @@ E.g. when expanding a filepattern into files, log what 
the filepattern was and h
 <p>Do:</p>
 
 <ul>
-  <li>Generally, follow the rules of <a href="http://semver.org/";>semantic 
versioning</a>.</li>
+  <li>Generally, follow the rules of <a href="https://semver.org/";>semantic 
versioning</a>.</li>
   <li>If the API of the transform is not yet stable, annotate it as <code 
class="highlighter-rouge">@Experimental</code> (Java) or <code 
class="highlighter-rouge">@experimental</code> (<a 
href="https://beam.apache.org/releases/pydoc/2.19.0/apache_beam.utils.annotations.html";>Python</a>).</li>
  <li>If the API is deprecated, annotate it as <code 
class="highlighter-rouge">@Deprecated</code> (Java) or <code 
class="highlighter-rouge">@deprecated</code> (<a 
href="https://beam.apache.org/releases/pydoc/2.19.0/apache_beam.utils.annotations.html";>Python</a>).</li>
-  <li>Pay attention to the stability and versioning of third-party classes 
exposed by the transform’s API: if they are unstable or improperly versioned 
(do not obey <a href="http://semver.org/";>semantic versioning</a>), it is 
better to wrap them in your own classes.</li>
+  <li>Pay attention to the stability and versioning of third-party classes 
exposed by the transform’s API: if they are unstable or improperly versioned 
(do not obey <a href="https://semver.org/";>semantic versioning</a>), it is 
better to wrap them in your own classes.</li>
 </ul>
 
 <p>Do not:</p>
diff --git a/website/generated-content/contribute/release-guide/index.html 
b/website/generated-content/contribute/release-guide/index.html
index 275f47f..2acb38f 100644
--- a/website/generated-content/contribute/release-guide/index.html
+++ b/website/generated-content/contribute/release-guide/index.html
@@ -571,7 +571,7 @@ export GPG_AGENT_INFO
 
 <h4 id="access-to-apache-nexus-repository">Access to Apache Nexus 
repository</h4>
 
-<p>Configure access to the <a href="http://repository.apache.org/";>Apache 
Nexus repository</a>, which enables final deployment of releases to the Maven 
Central Repository.</p>
+<p>Configure access to the <a href="https://repository.apache.org/";>Apache 
Nexus repository</a>, which enables final deployment of releases to the Maven 
Central Repository.</p>
 
 <ol>
   <li>You log in with your Apache account.</li>
@@ -605,7 +605,7 @@ export GPG_AGENT_INFO
 please submit your GPG public key into <a href="http://pgp.mit.edu:11371/";>MIT 
PGP Public Key Server</a>.</p>
 
 <p>If MIT doesn’t work for you (it probably won’t, it’s slow, returns 502 a 
lot, Nexus might error out not being able to find the keys),
-use a keyserver at <code class="highlighter-rouge">ubuntu.com</code> instead: 
http://keyserver.ubuntu.com/.</p>
+use a keyserver at <code class="highlighter-rouge">ubuntu.com</code> instead: 
https://keyserver.ubuntu.com/.</p>
 
 <h4 id="website-development-setup">Website development setup</h4>
 
diff --git 
a/website/generated-content/documentation/dsls/sql/calcite/data-types/index.html
 
b/website/generated-content/documentation/dsls/sql/calcite/data-types/index.html
index 9418008..f9adae1 100644
--- 
a/website/generated-content/documentation/dsls/sql/calcite/data-types/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/calcite/data-types/index.html
@@ -309,7 +309,7 @@ limitations under the License.
 
 <p>Beam SQL supports standard SQL scalar data types as well as extensions
 including arrays, maps, and nested rows. This page documents supported
-<a href="http://calcite.apache.org/docs/reference.html#data-types";>Apache 
Calcite data types</a> supported by Beam Calcite SQL.</p>
+<a href="https://calcite.apache.org/docs/reference.html#data-types";>Apache 
Calcite data types</a> supported by Beam Calcite SQL.</p>
 
 <p>In Java, these types are mapped to Java types large enough to hold the
 full range of values.</p>
diff --git 
a/website/generated-content/documentation/dsls/sql/calcite/lexical/index.html 
b/website/generated-content/documentation/dsls/sql/calcite/lexical/index.html
index 9571a86..a59ff46 100644
--- 
a/website/generated-content/documentation/dsls/sql/calcite/lexical/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/calcite/lexical/index.html
@@ -1336,7 +1336,7 @@ nested comment that renders the query invalid.</p>
 <blockquote>
   <p>Portions of this page are modifications based on work created and
 <a href="https://developers.google.com/terms/site-policies";>shared by 
Google</a>
-and used according to terms described in the <a 
href="http://creativecommons.org/licenses/by/3.0/";>Creative Commons 3.0
+and used according to terms described in the <a 
href="https://creativecommons.org/licenses/by/3.0/";>Creative Commons 3.0
 Attribution License</a>.</p>
 </blockquote>
 
diff --git 
a/website/generated-content/documentation/dsls/sql/calcite/overview/index.html 
b/website/generated-content/documentation/dsls/sql/calcite/overview/index.html
index a93185e..05ca1d9 100644
--- 
a/website/generated-content/documentation/dsls/sql/calcite/overview/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/calcite/overview/index.html
@@ -312,7 +312,7 @@ limitations under the License.
 -->
 <h1 id="beam-calcite-sql-overview">Beam Calcite SQL overview</h1>
 
-<p><a href="http://calcite.apache.org";>Apache Calcite</a> is a widespread SQL 
dialect used in
+<p><a href="https://calcite.apache.org";>Apache Calcite</a> is a widespread SQL 
dialect used in
 big data processing with some streaming enhancements. Beam Calcite SQL is the 
default Beam SQL dialect.</p>
 
 <p>Beam SQL has additional extensions leveraging Beam’s unified 
batch/streaming model and processing complex data types. You can use these 
extensions with all Beam SQL dialects, including Beam Calcite SQL.</p>
@@ -331,34 +331,34 @@ big data processing with some streaming enhancements. 
Beam Calcite SQL is the de
 
 <table class="table-bordered table-striped">
   <tr><th>Operators and functions</th><th>Beam SQL support status</th></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#operator-precedence";>Operator
 precedence</a></td><td>Yes</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#comparison-operators";>Comparison
 operators</a></td><td class="style1">See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#comparison-functions-and-operators">scalar
 functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#logical-operators";>Logical 
operators</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#logical-functions-and-operators">scalar
 functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#arithmetic-operators-and-functions";>Arithmetic
 operators and functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#arithmetic-expressions">scalar
 functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#character-string-operators-and-functions";>Character
 string operators and functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#string-functions">scalar
 functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#binary-string-operators-and-functions";>Binary
 string operators and functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#datetime-functions";>Date/time
 functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#date-functions">scalar 
functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#system-functions";>System 
functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#conditional-functions-and-operators";>Conditional
 functions and operators</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#conditional-functions">scalar
 functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#type-conversion";>Type 
conversion</a></td><td>Yes</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#value-constructors";>Value 
constructors</a></td><td>No, except array</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#collection-functions";>Collection
 functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#period-predicates";>Period 
predicates</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#jdbc-function-escape";>JDBC 
function escape</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#aggregate-functions";>Aggregate
 functions</a></td>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#operator-precedence";>Operator
 precedence</a></td><td>Yes</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#comparison-operators";>Comparison
 operators</a></td><td class="style1">See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#comparison-functions-and-operators">scalar
 functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#logical-operators";>Logical 
operators</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#logical-functions-and-operators">scalar
 functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#arithmetic-operators-and-functions";>Arithmetic
 operators and functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#arithmetic-expressions">scalar
 functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#character-string-operators-and-functions";>Character
 string operators and functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#string-functions">scalar
 functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#binary-string-operators-and-functions";>Binary
 string operators and functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#datetime-functions";>Date/time
 functions</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#date-functions">scalar 
functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#system-functions";>System 
functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#conditional-functions-and-operators";>Conditional
 functions and operators</a></td><td>See Beam SQL <a 
href="/documentation/dsls/sql/calcite/scalar-functions/#conditional-functions">scalar
 functions</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#type-conversion";>Type 
conversion</a></td><td>Yes</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#value-constructors";>Value 
constructors</a></td><td>No, except array</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#collection-functions";>Collection
 functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#period-predicates";>Period 
predicates</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#jdbc-function-escape";>JDBC 
function escape</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#aggregate-functions";>Aggregate
 functions</a></td>
 <td>See Beam SQL extension <a 
href="/documentation/dsls/sql/calcite/aggregate-functions/">aggregate 
functions</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#window-functions";>Window 
functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#grouping-functions";>Grouping
 functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#grouped-window-functions";>Grouped
 window functions</a></td><td>See Beam SQL extension <a 
href="/documentation/dsls/sql/windowing-and-triggering/">windowing and 
triggering</a></td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#grouped-auxiliary-functions";>Grouped
 auxiliary functions</a></td><td>Yes, except SESSION_END</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#spatial-functions";>Spatial 
functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#geometry-creation-functions-3d";>Geometry
 creation functions (3D)</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#geometry-predicates";>Geometry
 predicates</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#json-functions";>JSON 
functions</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#user-defined-functions";>User-defined
 functions</a></td>
-<td>See Beam SQL extension <a 
href="/documentation/dsls/sql/user-defined-functions/">user-defined 
functions</a>. You cannot call functions with <a 
href="http://calcite.apache.org/docs/reference.html#calling-functions-with-named-and-optional-parameters";>named
 and optional parameters</a>.</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#match_recognize";>MATCH_RECOGNIZE</a></td><td>No</td></tr>
-<tr><td><a 
href="http://calcite.apache.org/docs/reference.html#ddl-extensions";>DDL 
Extensions</a></td><td>See Beam SQL extension <a 
href="/documentation/dsls/sql/create-external-table/">CREATE EXTERNAL 
TABLE</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#window-functions";>Window 
functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#grouping-functions";>Grouping
 functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#grouped-window-functions";>Grouped
 window functions</a></td><td>See Beam SQL extension <a 
href="/documentation/dsls/sql/windowing-and-triggering/">windowing and 
triggering</a></td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#grouped-auxiliary-functions";>Grouped
 auxiliary functions</a></td><td>Yes, except SESSION_END</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#spatial-functions";>Spatial 
functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#geometry-creation-functions-3d";>Geometry
 creation functions (3D)</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#geometry-predicates";>Geometry
 predicates</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#json-functions";>JSON 
functions</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#user-defined-functions";>User-defined
 functions</a></td>
+<td>See Beam SQL extension <a 
href="/documentation/dsls/sql/user-defined-functions/">user-defined 
functions</a>. You cannot call functions with <a 
href="https://calcite.apache.org/docs/reference.html#calling-functions-with-named-and-optional-parameters";>named
 and optional parameters</a>.</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#match_recognize";>MATCH_RECOGNIZE</a></td><td>No</td></tr>
+<tr><td><a 
href="https://calcite.apache.org/docs/reference.html#ddl-extensions";>DDL 
Extensions</a></td><td>See Beam SQL extension <a 
href="/documentation/dsls/sql/create-external-table/">CREATE EXTERNAL 
TABLE</a></td></tr>
 </table>
 
       </div>
diff --git 
a/website/generated-content/documentation/dsls/sql/calcite/query-syntax/index.html
 
b/website/generated-content/documentation/dsls/sql/calcite/query-syntax/index.html
index dff0194..2c1c014 100644
--- 
a/website/generated-content/documentation/dsls/sql/calcite/query-syntax/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/calcite/query-syntax/index.html
@@ -390,7 +390,7 @@ batch/streaming model:</p>
 
 <p>The main functionality of Beam SQL is the <code 
class="highlighter-rouge">SELECT</code> statement. This is how you
 query and join data. The operations supported are a subset of
-<a href="http://calcite.apache.org/docs/reference.html#grammar";>Apache Calcite 
SQL</a>.</p>
+<a href="https://calcite.apache.org/docs/reference.html#grammar";>Apache 
Calcite SQL</a>.</p>
 
 <h2 id="sql-syntax">SQL Syntax</h2>
 
@@ -1083,7 +1083,7 @@ example, <code class="highlighter-rouge">FROM 
abc.def.ghi</code> implies <code c
 <a 
href="https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax";>work</a>
 created and
 <a href="https://developers.google.com/terms/site-policies";>shared by 
Google</a>
-and used according to terms described in the <a 
href="http://creativecommons.org/licenses/by/3.0/";>Creative Commons 3.0
+and used according to terms described in the <a 
href="https://creativecommons.org/licenses/by/3.0/";>Creative Commons 3.0
 Attribution License</a>.</p>
 </blockquote>
 
diff --git 
a/website/generated-content/documentation/dsls/sql/overview/index.html 
b/website/generated-content/documentation/dsls/sql/overview/index.html
index ab07a5f..6d2c3b9 100644
--- a/website/generated-content/documentation/dsls/sql/overview/index.html
+++ b/website/generated-content/documentation/dsls/sql/overview/index.html
@@ -322,7 +322,7 @@ You can freely mix SQL <code 
class="highlighter-rouge">PTransforms</code> and ot
 <p>Beam SQL includes the following dialects:</p>
 
 <ul>
-  <li><a href="http://calcite.apache.org";>Beam Calcite SQL</a></li>
+  <li><a href="https://calcite.apache.org";>Beam Calcite SQL</a></li>
   <li><a href="https://github.com/google/zetasql";>Beam ZetaSQL</a></li>
 </ul>
 
diff --git 
a/website/generated-content/documentation/dsls/sql/zetasql/data-types/index.html
 
b/website/generated-content/documentation/dsls/sql/zetasql/data-types/index.html
index e5597e8..a384e09 100644
--- 
a/website/generated-content/documentation/dsls/sql/zetasql/data-types/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/zetasql/data-types/index.html
@@ -567,7 +567,7 @@ explicitly specified, the default time zone, UTC, is 
used.</p>
 <p>Time zones are represented by strings in one of these two canonical 
formats:</p>
 <ul>
 <li>Offset from Coordinated Universal Time (UTC), or the letter <code>Z</code> 
for UTC</li>
-<li>Time zone name from the <a href="http://www.iana.org/time-zones";>tz 
database</a></li>
+<li>Time zone name from the <a href="https://www.iana.org/time-zones";>tz 
database</a></li>
 </ul>
 <h4 id="offset-from-coordinated-universal-time-utc">Offset from Coordinated 
Universal Time (UTC)</h4>
 <h5 id="offset-format">Offset Format</h5>
@@ -585,9 +585,9 @@ of the timestamp.</p>
 <pre class="codehilite"><code>2014-09-27 12:30:00.45-8:00
 2014-09-27T12:30:00.45Z</code></pre>
 <h4 id="time-zone-name">Time zone name</h4>
-<p>Time zone names are from the <a href="http://www.iana.org/time-zones";>tz 
database</a>. For a
+<p>Time zone names are from the <a href="https://www.iana.org/time-zones";>tz 
database</a>. For a
 less comprehensive but simpler reference, see the
-<a href="http://en.wikipedia.org/wiki/List_of_tz_database_time_zones";>List of 
tz database time zones</a>
+<a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones";>List of 
tz database time zones</a>
 on Wikipedia.</p>
 <h5 id="format">Format</h5>
 <pre class="codehilite"><code>continent/[region/]city</code></pre>
diff --git 
a/website/generated-content/documentation/dsls/sql/zetasql/lexical/index.html 
b/website/generated-content/documentation/dsls/sql/zetasql/lexical/index.html
index 217696e..446d5e2 100644
--- 
a/website/generated-content/documentation/dsls/sql/zetasql/lexical/index.html
+++ 
b/website/generated-content/documentation/dsls/sql/zetasql/lexical/index.html
@@ -627,9 +627,9 @@ represents the offset from Coordinated Universal Time 
(UTC).</p>
 '-7'</code></pre>
 
 <p>Timezones can also be expressed using string timezone names from the
-<a href="http://www.iana.org/time-zones";>tz database</a>. For a less 
comprehensive but
+<a href="https://www.iana.org/time-zones";>tz database</a>. For a less 
comprehensive but
 simpler reference, see the
-<a href="http://en.wikipedia.org/wiki/List_of_tz_database_time_zones";>List of 
tz database timezones</a>
+<a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones";>List of 
tz database timezones</a>
 on Wikipedia. Canonical timezone names have the format
 <code>&lt;continent/[region/]city&gt;</code>, such as 
<code>America/Los_Angeles</code>.</p>
 
diff --git a/website/generated-content/documentation/index.html 
b/website/generated-content/documentation/index.html
index f5a8332..d32137f 100644
--- a/website/generated-content/documentation/index.html
+++ b/website/generated-content/documentation/index.html
@@ -542,13 +542,13 @@ limitations under the License.
 
 <ul>
   <li><a href="/documentation/runners/direct/">DirectRunner</a>: Runs locally 
on your machine – great for developing, testing, and debugging.</li>
-  <li><a href="/documentation/runners/apex/">ApexRunner</a>: Runs on <a 
href="http://apex.apache.org";>Apache Apex</a>.</li>
-  <li><a href="/documentation/runners/flink/">FlinkRunner</a>: Runs on <a 
href="http://flink.apache.org";>Apache Flink</a>.</li>
-  <li><a href="/documentation/runners/spark/">SparkRunner</a>: Runs on <a 
href="http://spark.apache.org";>Apache Spark</a>.</li>
+  <li><a href="/documentation/runners/apex/">ApexRunner</a>: Runs on <a 
href="https://apex.apache.org";>Apache Apex</a>.</li>
+  <li><a href="/documentation/runners/flink/">FlinkRunner</a>: Runs on <a 
href="https://flink.apache.org";>Apache Flink</a>.</li>
+  <li><a href="/documentation/runners/spark/">SparkRunner</a>: Runs on <a 
href="https://spark.apache.org";>Apache Spark</a>.</li>
   <li><a href="/documentation/runners/dataflow/">DataflowRunner</a>: Runs on 
<a href="https://cloud.google.com/dataflow";>Google Cloud Dataflow</a>, a fully 
managed service within <a href="https://cloud.google.com/";>Google Cloud 
Platform</a>.</li>
-  <li><a href="/documentation/runners/gearpump/">GearpumpRunner</a>: Runs on 
<a href="http://gearpump.apache.org";>Apache Gearpump (incubating)</a>.</li>
-  <li><a href="/documentation/runners/samza/">SamzaRunner</a>: Runs on <a 
href="http://samza.apache.org";>Apache Samza</a>.</li>
-  <li><a href="/documentation/runners/nemo/">NemoRunner</a>: Runs on <a 
href="http://nemo.apache.org";>Apache Nemo</a>.</li>
+  <li><a href="/documentation/runners/gearpump/">GearpumpRunner</a>: Runs on 
<a href="https://gearpump.apache.org";>Apache Gearpump (incubating)</a>.</li>
+  <li><a href="/documentation/runners/samza/">SamzaRunner</a>: Runs on <a 
href="https://samza.apache.org";>Apache Samza</a>.</li>
+  <li><a href="/documentation/runners/nemo/">NemoRunner</a>: Runs on <a 
href="https://nemo.apache.org";>Apache Nemo</a>.</li>
   <li><a href="/documentation/runners/jet/">JetRunner</a>: Runs on <a 
href="https://jet.hazelcast.org/";>Hazelcast Jet</a>.</li>
 </ul>
 
diff --git 
a/website/generated-content/documentation/io/built-in/google-bigquery/index.html
 
b/website/generated-content/documentation/io/built-in/google-bigquery/index.html
index a1f3efb..d58d1a2 100644
--- 
a/website/generated-content/documentation/io/built-in/google-bigquery/index.html
+++ 
b/website/generated-content/documentation/io/built-in/google-bigquery/index.html
@@ -1564,7 +1564,7 @@ that has a mean temp smaller than the derived global 
mean.</p>
   </li>
   <li>
     <p><a 
href="https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/cookbook/JoinExamples.java";>JoinExamples</a>
-reads a sample of the <a href="http://goo.gl/OB6oin";>GDELT “world event”</a> 
from
+reads a sample of the <a href="https://goo.gl/OB6oin";>GDELT “world event”</a> 
from
 BigQuery and joins the event <code class="highlighter-rouge">action</code> 
country code against a table that maps
 country codes to country names.</p>
   </li>
diff --git a/website/generated-content/documentation/io/testing/index.html 
b/website/generated-content/documentation/io/testing/index.html
index 41f714b..a9f86ff 100644
--- a/website/generated-content/documentation/io/testing/index.html
+++ b/website/generated-content/documentation/io/testing/index.html
@@ -946,7 +946,7 @@ If you modified/added new Jenkins job definitions in your 
Pull Request, run the
         <ol>
           <li>An image provided by the creator of the data source/sink (if 
they officially maintain it). For Apache projects, this would be the official 
Apache repository.</li>
           <li>Official Docker images, because they have security fixes and 
guaranteed maintenance.</li>
-          <li>Non-official Docker images, or images from other providers that 
have good maintainers (e.g. <a href="http://quay.io/";>quay.io</a>).</li>
+          <li>Non-official Docker images, or images from other providers that 
have good maintainers (e.g. <a href="https://quay.io/";>quay.io</a>).</li>
         </ol>
       </li>
     </ul>
diff --git 
a/website/generated-content/documentation/pipelines/test-your-pipeline/index.html
 
b/website/generated-content/documentation/pipelines/test-your-pipeline/index.html
index 1649477..1499b90 100644
--- 
a/website/generated-content/documentation/pipelines/test-your-pipeline/index.html
+++ 
b/website/generated-content/documentation/pipelines/test-your-pipeline/index.html
@@ -562,7 +562,7 @@ limitations under the License.
 
 <p>The Beam SDK for Java provides a convenient way to test an individual <code 
class="highlighter-rouge">DoFn</code> called <a 
href="https://github.com/apache/beam/blob/master/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/DoFnTesterTest.java";>DoFnTester</a>,
 which is included in the SDK <code class="highlighter-rouge">Transforms</code> 
package.</p>
 
-<p><code class="highlighter-rouge">DoFnTester</code>uses the <a 
href="http://junit.org";>JUnit</a> framework. To use <code 
class="highlighter-rouge">DoFnTester</code>, you’ll need to do the 
following:</p>
+<p><code class="highlighter-rouge">DoFnTester</code> uses the <a 
href="https://junit.org";>JUnit</a> framework. To use <code 
class="highlighter-rouge">DoFnTester</code>, you’ll need to do the 
following:</p>
 
 <ol>
   <li>Create a <code class="highlighter-rouge">DoFnTester</code>. You’ll need 
to pass an instance of the <code class="highlighter-rouge">DoFn</code> you want 
to test to the static factory method for <code 
class="highlighter-rouge">DoFnTester</code>.</li>
diff --git 
a/website/generated-content/documentation/resources/learning-resources/index.html
 
b/website/generated-content/documentation/resources/learning-resources/index.html
index 728ba29..d118bb3 100644
--- 
a/website/generated-content/documentation/resources/learning-resources/index.html
+++ 
b/website/generated-content/documentation/resources/learning-resources/index.html
@@ -634,7 +634,7 @@ limitations under the License.
 <h3 id="advanced-concepts">Advanced Concepts</h3>
 
 <ul>
-  <li><strong><a 
href="http://amygdala.github.io/dataflow/app_engine/2017/10/24/gae_dataflow.html";>Running
 on AppEngine</a></strong> - Use a Dataflow template to launch a pipeline from 
Google AppEngine, and how to run the pipeline periodically via a cron job.</li>
+  <li><strong><a 
href="https://amygdala.github.io/dataflow/app_engine/2017/10/24/gae_dataflow.html";>Running
 on AppEngine</a></strong> - Use a Dataflow template to launch a pipeline from 
Google AppEngine, and how to run the pipeline periodically via a cron job.</li>
   <li><strong><a 
href="https://beam.apache.org/blog/2017/02/13/stateful-processing.html";>Stateful
 Processing</a></strong> - Learn how to access a persistent mutable state while 
processing input elements, this allows for <em>side effects</em> in a <code 
class="highlighter-rouge">DoFn</code>. This can be used for 
arbitrary-but-consistent index assignment, if you want to assign a unique 
incrementing index to each incoming element where order doesn’t matter.</li>
   <li><strong><a 
href="https://beam.apache.org/blog/2017/08/28/timely-processing.html";>Timely 
and Stateful Processing</a></strong> - An example on how to do batched RPC 
calls. The call requests are stored in a mutable state as they are received. 
Once there are either enough requests or a certain time has passed, the batch 
of requests is triggered to be sent.</li>
   <li><strong><a 
href="https://cloud.google.com/blog/products/gcp/running-external-libraries-with-cloud-dataflow-for-grid-computing-workloads";>Running
 External Libraries</a></strong> - Call an external library written in a 
language that does not have a native SDK in Apache Beam such as C++.</li>
diff --git a/website/generated-content/documentation/runners/apex/index.html 
b/website/generated-content/documentation/runners/apex/index.html
index 8872d38..d526892 100644
--- a/website/generated-content/documentation/runners/apex/index.html
+++ b/website/generated-content/documentation/runners/apex/index.html
@@ -225,9 +225,9 @@ limitations under the License.
 -->
 <h1 id="using-the-apache-apex-runner">Using the Apache Apex Runner</h1>
 
-<p>The Apex Runner executes Apache Beam pipelines using <a 
href="http://apex.apache.org/";>Apache Apex</a> as an underlying engine. The 
runner has broad support for the <a 
href="/documentation/runners/capability-matrix/">Beam model and supports 
streaming and batch pipelines</a>.</p>
+<p>The Apex Runner executes Apache Beam pipelines using <a 
href="https://apex.apache.org/";>Apache Apex</a> as an underlying engine. The 
runner has broad support for the <a 
href="/documentation/runners/capability-matrix/">Beam model and supports 
streaming and batch pipelines</a>.</p>
 
-<p><a href="http://apex.apache.org/";>Apache Apex</a> is a stream processing 
platform and framework for low-latency, high-throughput and fault-tolerant 
analytics applications on Apache Hadoop. Apex has a unified streaming 
architecture and can be used for real-time and batch processing.</p>
+<p><a href="https://apex.apache.org/";>Apache Apex</a> is a stream processing 
platform and framework for low-latency, high-throughput and fault-tolerant 
analytics applications on Apache Hadoop. Apex has a unified streaming 
architecture and can be used for real-time and batch processing.</p>
 
 <p>The following instructions are for running Beam pipelines with Apex on a 
YARN cluster.
 They are not required for Apex in embedded mode (see <a 
href="/get-started/quickstart-java/">quickstart</a>).</p>
@@ -236,9 +236,9 @@ They are not required for Apex in embedded mode (see <a 
href="/get-started/quick
 
 <p>You may set up your own Hadoop cluster. Beam does not require anything 
extra to launch the pipelines on YARN.
 An optional Apex installation may be useful for monitoring and troubleshooting.
-The Apex CLI can be <a 
href="http://apex.apache.org/docs/apex/apex_development_setup/";>built</a> or
+The Apex CLI can be <a 
href="https://apex.apache.org/docs/apex/apex_development_setup/";>built</a> or
obtained as a binary build.
-For more download options see <a 
href="http://apex.apache.org/downloads.html";>distribution information on the 
Apache Apex website</a>.</p>
+For more download options see <a 
href="https://apex.apache.org/downloads.html";>distribution information on the 
Apache Apex website</a>.</p>
 
 <h2 id="running-wordcount-with-apex">Running wordcount with Apex</h2>
 
@@ -274,7 +274,7 @@ it is necessary to augment the build to include the 
respective file system provi
 
 <ul>
  <li>YARN: Using the YARN web UI, generally running on port 8088 on the node running 
the resource manager.</li>
-  <li>Apex command-line interface: <a 
href="http://apex.apache.org/docs/apex/apex_cli/#apex-cli-commands";>Using the 
Apex CLI to get running application information</a>.</li>
+  <li>Apex command-line interface: <a 
href="https://apex.apache.org/docs/apex/apex_cli/#apex-cli-commands";>Using the 
Apex CLI to get running application information</a>.</li>
 </ul>
 
 <p>Check the output of the pipeline:</p>
diff --git 
a/website/generated-content/documentation/runners/capability-matrix/index.html 
b/website/generated-content/documentation/runners/capability-matrix/index.html
index 16a9ab0..8364d89 100644
--- 
a/website/generated-content/documentation/runners/capability-matrix/index.html
+++ 
b/website/generated-content/documentation/runners/capability-matrix/index.html
@@ -220,7 +220,7 @@ limitations under the License.
 -->
 
 <h1 id="beam-capability-matrix">Beam Capability Matrix</h1>
-<p>Apache Beam provides a portable API layer for building sophisticated 
data-parallel processing pipelines that may be executed across a diversity of 
execution engines, or <i>runners</i>. The core concepts of this layer are based 
upon the Beam Model (formerly referred to as the <a 
href="http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf";>Dataflow Model</a>), and 
implemented to varying degrees in each Beam runner. To help clarify the 
capabilities of individual runners, we’ve created the capa [...]
+<p>Apache Beam provides a portable API layer for building sophisticated 
data-parallel processing pipelines that may be executed across a diversity of 
execution engines, or <i>runners</i>. The core concepts of this layer are based 
upon the Beam Model (formerly referred to as the <a 
href="https://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf";>Dataflow Model</a>), 
and implemented to varying degrees in each Beam runner. To help clarify the 
capabilities of individual runners, we’ve created the cap [...]
 
 <p>Individual capabilities have been grouped by their corresponding <span 
class="wwwh-what-dark">What</span> / <span class="wwwh-where-dark">Where</span> 
/ <span class="wwwh-when-dark">When</span> / <span 
class="wwwh-how-dark">How</span> question:</p>
 
@@ -231,7 +231,7 @@ limitations under the License.
   <li><span class="wwwh-how-dark">How</span> do refinements of results 
relate?</li>
 </ul>
 
-<p>For more details on the <span class="wwwh-what-dark">What</span> / <span 
class="wwwh-where-dark">Where</span> / <span class="wwwh-when-dark">When</span> 
/ <span class="wwwh-how-dark">How</span> breakdown of concepts, we recommend 
reading through the <a 
href="http://oreilly.com/ideas/the-world-beyond-batch-streaming-102";>Streaming 
102</a> post on O’Reilly Radar.</p>
+<p>For more details on the <span class="wwwh-what-dark">What</span> / <span 
class="wwwh-where-dark">Where</span> / <span class="wwwh-when-dark">When</span> 
/ <span class="wwwh-how-dark">How</span> breakdown of concepts, we recommend 
reading through the <a 
href="https://oreilly.com/ideas/the-world-beyond-batch-streaming-102";>Streaming 
102</a> post on O’Reilly Radar.</p>
 
 <p>Note that in the future, we intend to add additional tables beyond the 
current set, for things like runtime characteristics (e.g. at-least-once vs 
exactly-once), performance, etc.</p>
 
diff --git 
a/website/generated-content/documentation/runners/gearpump/index.html 
b/website/generated-content/documentation/runners/gearpump/index.html
index b94f80d..e905737 100644
--- a/website/generated-content/documentation/runners/gearpump/index.html
+++ b/website/generated-content/documentation/runners/gearpump/index.html
@@ -241,7 +241,7 @@ When you are running your pipeline with Gearpump Runner you 
just need to create
 <p>The <a href="/documentation/runners/capability-matrix/">Beam Capability 
Matrix</a> documents the currently supported capabilities of the Gearpump 
Runner.</p>
 
 <h2 id="writing-beam-pipeline-with-gearpump-runner">Writing Beam Pipeline with 
Gearpump Runner</h2>
-<p>To use the Gearpump Runner in a distributed mode, you have to setup a 
Gearpump cluster first by following the Gearpump <a 
href="http://gearpump.apache.org/releases/latest/deployment/deployment-standalone/index.html";>setup
 quickstart</a>.</p>
+<p>To use the Gearpump Runner in a distributed mode, you have to set up a 
Gearpump cluster first by following the Gearpump <a 
href="https://gearpump.apache.org/releases/latest/deployment/deployment-standalone/index.html";>setup
 quickstart</a>.</p>
 
 <p>To enable the Gearpump runner in a Beam pipeline, add a dependency on the 
latest version of the Gearpump runner to your pom.xml.
Your Beam application should also package the Beam SDK explicitly; here is a 
snippet of an example pom.xml:</p>
@@ -318,7 +318,7 @@ And your Beam application should also pack Beam SDK 
explicitly and here is a sni
 </code></pre></div></div>
 
 <h2 id="monitoring-your-application">Monitoring your application</h2>
-<p>You can monitor a running Gearpump application using Gearpump’s Dashboard. 
Please follow the Gearpump <a 
href="http://gearpump.apache.org/releases/latest/deployment/deployment-standalone/index.html#start-ui";>Start
 UI</a> to start the dashboard.</p>
+<p>You can monitor a running Gearpump application using Gearpump’s Dashboard. 
Please follow the Gearpump <a 
href="https://gearpump.apache.org/releases/latest/deployment/deployment-standalone/index.html#start-ui";>Start
 UI</a> to start the dashboard.</p>
 
 <h2 id="pipeline-options-for-the-gearpump-runner">Pipeline options for the 
Gearpump Runner</h2>
 
diff --git 
a/website/generated-content/documentation/runners/mapreduce/index.html 
b/website/generated-content/documentation/runners/mapreduce/index.html
index 337a508..2f69a4a 100644
--- a/website/generated-content/documentation/runners/mapreduce/index.html
+++ b/website/generated-content/documentation/runners/mapreduce/index.html
@@ -225,7 +225,7 @@ limitations under the License.
 -->
 <h1 id="using-the-apache-hadoop-mapreduce-runner">Using the Apache Hadoop 
MapReduce Runner</h1>
 
-<p>The Apache Hadoop MapReduce Runner can be used to execute Beam pipelines 
using <a href="http://hadoop.apache.org/";>Apache Hadoop</a>.</p>
+<p>The Apache Hadoop MapReduce Runner can be used to execute Beam pipelines 
using <a href="https://hadoop.apache.org/";>Apache Hadoop</a>.</p>
 
 <p>The <a href="/documentation/runners/capability-matrix/">Beam Capability 
Matrix</a> documents the currently supported capabilities of the Apache Hadoop 
MapReduce Runner.</p>
 
diff --git a/website/generated-content/documentation/runners/nemo/index.html 
b/website/generated-content/documentation/runners/nemo/index.html
index 1337c1a..61cd25f 100644
--- a/website/generated-content/documentation/runners/nemo/index.html
+++ b/website/generated-content/documentation/runners/nemo/index.html
@@ -232,7 +232,7 @@ limitations under the License.
 -->
 <h1 id="using-the-apache-nemo-runner">Using the Apache Nemo Runner</h1>
 
-<p>The Apache Nemo Runner can be used to execute Beam pipelines using <a 
href="http://nemo.apache.org";>Apache Nemo</a>.
+<p>The Apache Nemo Runner can be used to execute Beam pipelines using <a 
href="https://nemo.apache.org";>Apache Nemo</a>.
 The Nemo Runner can optimize Beam pipelines with the Nemo compiler through 
various optimization passes
 and execute them in a distributed fashion using the Nemo runtime. You can also 
deploy a self-contained application
 for local mode or run using resource managers like YARN or Mesos.</p>
diff --git a/website/generated-content/documentation/runners/samza/index.html 
b/website/generated-content/documentation/runners/samza/index.html
index 2cf6582..3e094cc 100644
--- a/website/generated-content/documentation/runners/samza/index.html
+++ b/website/generated-content/documentation/runners/samza/index.html
@@ -231,7 +231,7 @@ limitations under the License.
 
 <h1 id="using-the-apache-samza-runner">Using the Apache Samza Runner</h1>
 
-<p>The Apache Samza Runner can be used to execute Beam pipelines using <a 
href="http://samza.apache.org/";>Apache Samza</a>. The Samza Runner executes 
Beam pipeline in a Samza application and can run locally. The application can 
further be built into a .tgz file, and deployed to a YARN cluster or Samza 
standalone cluster with Zookeeper.</p>
+<p>The Apache Samza Runner can be used to execute Beam pipelines using <a 
href="https://samza.apache.org/";>Apache Samza</a>. The Samza Runner executes 
a Beam pipeline in a Samza application and can run locally. The application can 
further be built into a .tgz file, and deployed to a YARN cluster or Samza 
standalone cluster with Zookeeper.</p>
 
 <p>The Samza Runner and Samza are suitable for large scale, stateful streaming 
jobs, and provide:</p>
 
@@ -369,7 +369,7 @@ job.default.system=${job_default_system}
 
 <h2 id="monitoring-your-job">Monitoring your job</h2>
 
-<p>You can monitor your pipeline job using metrics emitted from both Beam and 
Samza, e.g. Beam source metrics such as <code 
class="highlighter-rouge">elements_read</code> and <code 
class="highlighter-rouge">backlog_elements</code>, and Samza job metrics such 
as <code class="highlighter-rouge">job-healthy</code> and <code 
class="highlighter-rouge">process-envelopes</code>. A complete list of Samza 
metrics is in <a 
href="https://samza.apache.org/learn/documentation/latest/container/metrics 
[...]
+<p>You can monitor your pipeline job using metrics emitted from both Beam and 
Samza, e.g. Beam source metrics such as <code 
class="highlighter-rouge">elements_read</code> and <code 
class="highlighter-rouge">backlog_elements</code>, and Samza job metrics such 
as <code class="highlighter-rouge">job-healthy</code> and <code 
class="highlighter-rouge">process-envelopes</code>. A complete list of Samza 
metrics is in <a 
href="https://samza.apache.org/learn/documentation/latest/container/metrics 
[...]
 
 <p>For a running Samza YARN job, you can use YARN web UI to monitor the job 
status and check logs.</p>
 
diff --git a/website/generated-content/documentation/runners/spark/index.html 
b/website/generated-content/documentation/runners/spark/index.html
index b8eac5b..4f63b99 100644
--- a/website/generated-content/documentation/runners/spark/index.html
+++ b/website/generated-content/documentation/runners/spark/index.html
@@ -239,15 +239,15 @@ limitations under the License.
 -->
 <h1 id="using-the-apache-spark-runner">Using the Apache Spark Runner</h1>
 
-<p>The Apache Spark Runner can be used to execute Beam pipelines using <a 
href="http://spark.apache.org/";>Apache Spark</a>.
+<p>The Apache Spark Runner can be used to execute Beam pipelines using <a 
href="https://spark.apache.org/";>Apache Spark</a>.
 The Spark Runner can execute Spark pipelines just like a native Spark 
application; deploying a self-contained application for local mode, running on 
Spark’s Standalone RM, or using YARN or Mesos.</p>
 
 <p>The Spark Runner executes Beam pipelines on top of Apache Spark, 
providing:</p>
 
 <ul>
   <li>Batch and streaming (and combined) pipelines.</li>
-  <li>The same fault-tolerance <a 
href="http://spark.apache.org/docs/latest/streaming-programming-guide.html#fault-tolerance-semantics";>guarantees</a>
 as provided by RDDs and DStreams.</li>
-  <li>The same <a 
href="http://spark.apache.org/docs/latest/security.html";>security</a> features 
Spark provides.</li>
+  <li>The same fault-tolerance <a 
href="https://spark.apache.org/docs/latest/streaming-programming-guide.html#fault-tolerance-semantics";>guarantees</a>
 as provided by RDDs and DStreams.</li>
+  <li>The same <a 
href="https://spark.apache.org/docs/latest/security.html";>security</a> features 
Spark provides.</li>
   <li>Built-in metrics reporting using Spark’s metrics system, which reports 
Beam Aggregators as well.</li>
  <li>Native support for Beam side inputs via Spark’s Broadcast variables.</li>
 </ul>
@@ -431,7 +431,7 @@ provided with the Spark master address.
 <h3 id="running-on-a-pre-deployed-spark-cluster">Running on a pre-deployed 
Spark cluster</h3>
 
 <p>Deploying your Beam pipeline on a cluster that already has a Spark 
deployment (Spark classes are available in container classpath) does not 
require any additional dependencies.
-For more details on the different deployment modes see: <a 
href="http://spark.apache.org/docs/latest/spark-standalone.html";>Standalone</a>,
 <a href="http://spark.apache.org/docs/latest/running-on-yarn.html";>YARN</a>, 
or <a 
href="http://spark.apache.org/docs/latest/running-on-mesos.html";>Mesos</a>.</p>
+For more details on the different deployment modes see: <a 
href="https://spark.apache.org/docs/latest/spark-standalone.html";>Standalone</a>,
 <a href="https://spark.apache.org/docs/latest/running-on-yarn.html";>YARN</a>, 
or <a 
href="http://spark.apache.org/docs/latest/running-on-mesos.html";>Mesos</a>.</p>
 
 <p><span class="language-py">1. Start a Spark cluster which exposes the master 
on port 7077 by default.</span></p>
 
@@ -563,15 +563,15 @@ See <a 
href="/documentation/runtime/sdk-harness-config/">here</a> for details.
 <p>When submitting a Spark application to a cluster, it is common (and 
recommended) to use the <code>spark-submit</code> script that is provided with 
the Spark installation.
The <code>PipelineOptions</code> described above are not meant to replace 
<code>spark-submit</code>, but to complement it.
Any of the above-mentioned options can be passed as one of the 
<code>application-arguments</code>, and setting <code>--master</code> takes 
precedence.
-For more on how to generally use <code>spark-submit</code> checkout Spark <a 
href="http://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit";>documentation</a>.</p>
+For more on how to use <code>spark-submit</code> in general, check out the Spark <a 
href="https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit";>documentation</a>.</p>
 
 <h3 id="monitoring-your-job">Monitoring your job</h3>
 
-<p>You can monitor a running Spark job using the Spark <a 
href="http://spark.apache.org/docs/latest/monitoring.html#web-interfaces";>Web 
Interfaces</a>. By default, this is available at port <code 
class="highlighter-rouge">4040</code> on the driver node. If you run Spark on 
your local machine that would be <code 
class="highlighter-rouge">http://localhost:4040</code>.
-Spark also has a history server to <a 
href="http://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact";>view
 after the fact</a>.
+<p>You can monitor a running Spark job using the Spark <a 
href="https://spark.apache.org/docs/latest/monitoring.html#web-interfaces";>Web 
Interfaces</a>. By default, this is available at port <code 
class="highlighter-rouge">4040</code> on the driver node. If you run Spark on 
your local machine that would be <code 
class="highlighter-rouge">http://localhost:4040</code>.
+Spark also has a history server to <a 
href="https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact";>view
 after the fact</a>.
 <span class="language-java">
-Metrics are also available via <a 
href="http://spark.apache.org/docs/latest/monitoring.html#rest-api";>REST 
API</a>.
-Spark provides a <a 
href="http://spark.apache.org/docs/latest/monitoring.html#metrics";>metrics 
system</a> that allows reporting Spark metrics to a variety of Sinks. The Spark 
runner reports user-defined Beam Aggregators using this same metrics system and 
currently supports <code>GraphiteSink</code> and <code>CSVSink</code>, and 
providing support for additional Sinks supported by Spark is easy and 
straight-forward.
+Metrics are also available via <a 
href="https://spark.apache.org/docs/latest/monitoring.html#rest-api";>REST 
API</a>.
+Spark provides a <a 
href="https://spark.apache.org/docs/latest/monitoring.html#metrics";>metrics 
system</a> that allows reporting Spark metrics to a variety of sinks. The Spark 
runner reports user-defined Beam Aggregators using this same metrics system and 
currently supports <code>GraphiteSink</code> and <code>CSVSink</code>; adding 
support for additional sinks supported by Spark is 
straightforward.
 </span>
 <span class="language-py">Spark metrics are not yet supported on the portable 
runner.</span></p>
 
diff --git 
a/website/generated-content/documentation/sdks/java/euphoria/index.html 
b/website/generated-content/documentation/sdks/java/euphoria/index.html
index cf1fcd0..f618244 100644
--- a/website/generated-content/documentation/sdks/java/euphoria/index.html
+++ b/website/generated-content/documentation/sdks/java/euphoria/index.html
@@ -363,7 +363,7 @@ For each of the assigned windows the extracted value is 
accumulated using a user
 <p>An easy-to-use Java 8 API built on top of Beam’s Java SDK. The API provides a 
<a href="#operator-reference">high-level abstraction</a> of data 
transformations, with a focus on Java 8 language features (e.g. lambdas and 
streams). It is fully interoperable with the existing Beam SDK and convertible 
back and forth. It allows fast prototyping through the use of (optional) <a 
href="https://github.com/EsotericSoftware/kryo";>Kryo</a>-based coders, lambdas, 
and high-level operators and can be seamless [...]
 
 <p>The <a href="https://github.com/seznam/euphoria";>Euphoria API</a> project was 
started in 2014, with the clear goal of providing the main building block 
for <a href="https://www.seznam.cz/";>Seznam.cz’s</a> data infrastructure.
-In 2015, <a href="http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf";>DataFlow 
whitepaper</a> inspired original authors to go one step further and also 
provide the unified API for both stream and batch processing.
+In 2015, the <a href="https://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf";>Dataflow 
whitepaper</a> inspired the original authors to go one step further and also 
provide a unified API for both stream and batch processing.
The API was open-sourced in 2016 and is still in active development. As 
the Beam community’s goal was very similar, we decided to contribute
the API as a high-level DSL over the Beam Java SDK and share our effort with the 
community.</p>
 
diff --git 
a/website/generated-content/documentation/transforms/java/aggregation/hllcount/index.html
 
b/website/generated-content/documentation/transforms/java/aggregation/hllcount/index.html
index 9bd8439..1c34a69 100644
--- 
a/website/generated-content/documentation/transforms/java/aggregation/hllcount/index.html
+++ 
b/website/generated-content/documentation/transforms/java/aggregation/hllcount/index.html
@@ -504,7 +504,7 @@ limitations under the License.
 <p><br /></p>
 
 <p>Estimates the number of distinct elements in a data stream using the
-<a 
href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf";>HyperLogLog++
 algorithm</a>.
+<a 
href="https://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf";>HyperLogLog++
 algorithm</a>.
 The respective transforms to create and merge sketches, and to extract from 
them, are:</p>
 
 <ul>
diff --git a/website/generated-content/feed.xml 
b/website/generated-content/feed.xml
index e5ab106..0d5659e 100644
--- a/website/generated-content/feed.xml
+++ b/website/generated-content/feed.xml
@@ -1238,7 +1238,7 @@ in Java pipelines.&lt;/p&gt;
 
 &lt;p&gt;Beam also has a fancy new SQL command line that you can use to query 
your
 data interactively, be it Batch or Streaming. If you haven’t tried it, check 
out
-&lt;a 
href=&quot;http://bit.ly/ExploreBeamSQL&quot;&gt;http://bit.ly/ExploreBeamSQL&lt;/a&gt;.&lt;/p&gt;
+&lt;a 
href=&quot;https://bit.ly/ExploreBeamSQL&quot;&gt;https://bit.ly/ExploreBeamSQL&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;A nice feature of the SQL CLI is that you can use &lt;code 
class=&quot;highlighter-rouge&quot;&gt;CREATE EXTERNAL TABLE&lt;/code&gt;
 commands to &lt;em&gt;add&lt;/em&gt; data sources to be accessed in the CLI. 
Currently, the CLI
diff --git a/website/generated-content/get-started/beam-overview/index.html 
b/website/generated-content/get-started/beam-overview/index.html
index 0c361e0..accde9b 100644
--- a/website/generated-content/get-started/beam-overview/index.html
+++ b/website/generated-content/get-started/beam-overview/index.html
@@ -237,9 +237,9 @@ limitations under the License.
 
 <h1 id="apache-beam-overview">Apache Beam Overview</h1>
 
-<p>Apache Beam is an open source, unified model for defining both batch and 
streaming data-parallel processing pipelines. Using one of the open source Beam 
SDKs, you build a program that defines the pipeline. The pipeline is then 
executed by one of Beam’s supported <strong>distributed processing 
back-ends</strong>, which include <a href="http://apex.apache.org";>Apache 
Apex</a>, <a href="http://flink.apache.org";>Apache Flink</a>, <a 
href="http://spark.apache.org";>Apache Spark</a>, and <a  [...]
+<p>Apache Beam is an open source, unified model for defining both batch and 
streaming data-parallel processing pipelines. Using one of the open source Beam 
SDKs, you build a program that defines the pipeline. The pipeline is then 
executed by one of Beam’s supported <strong>distributed processing 
back-ends</strong>, which include <a href="https://apex.apache.org";>Apache 
Apex</a>, <a href="https://flink.apache.org";>Apache Flink</a>, <a 
href="https://spark.apache.org";>Apache Spark</a>, and  [...]
 
-<p>Beam is particularly useful for <a 
href="http://en.wikipedia.org/wiki/Embarassingly_parallel";>Embarrassingly 
Parallel</a> data processing tasks, in which the problem can be decomposed into 
many smaller bundles of data that can be processed independently and in 
parallel. You can also use Beam for Extract, Transform, and Load (ETL) tasks 
and pure data integration. These tasks are useful for moving data between 
different storage media and data sources, transforming data into a more desir 
[...]
+<p>Beam is particularly useful for <a 
href="https://en.wikipedia.org/wiki/Embarassingly_parallel";>Embarrassingly 
Parallel</a> data processing tasks, in which the problem can be decomposed into 
many smaller bundles of data that can be processed independently and in 
parallel. You can also use Beam for Extract, Transform, and Load (ETL) tasks 
and pure data integration. These tasks are useful for moving data between 
different storage media and data sources, transforming data into a more desi 
[...]
 
 <h2 id="apache-beam-sdks">Apache Beam SDKs</h2>
 
diff --git a/website/generated-content/get-started/downloads/index.html 
b/website/generated-content/get-started/downloads/index.html
index c0b767f..9d3d326 100644
--- a/website/generated-content/get-started/downloads/index.html
+++ b/website/generated-content/get-started/downloads/index.html
@@ -310,7 +310,7 @@ at scale.</p>
 <p>You <em>must</em> <a 
href="https://www.apache.org/info/verification.html";>verify</a> the integrity
 of downloaded files. We provide OpenPGP signatures for every release file. This
 signature should be matched against the
-<a href="https://www.apache.org/dist/beam/KEYS";>KEYS</a> file which contains 
the OpenPGP
+<a href="https://downloads.apache.org/beam/KEYS";>KEYS</a> file which contains 
the OpenPGP
 keys of Apache Beam’s Release Managers. We also provide SHA-512 checksums for
 every release file (or SHA-1 and MD5 checksums for older releases). After you
 download the file, you should calculate a checksum for your download, and make
@@ -318,7 +318,7 @@ sure it is the same as ours.</p>
 
 <h2 id="api-stability">API stability</h2>
 
-<p>Apache Beam uses <a href="http://semver.org/";>semantic versioning</a>. 
Version numbers use
+<p>Apache Beam uses <a href="https://semver.org/";>semantic versioning</a>. 
Version numbers use
 the form <code class="highlighter-rouge">major.minor.incremental</code> and 
are incremented as follows:</p>
 
 <ul>
@@ -337,69 +337,69 @@ versions denoted <code 
class="highlighter-rouge">0.x.y</code>.</p>
 
 <h3 id="2190-2020-02-04">2.19.0 (2020-02-04)</h3>
 <p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.19.0/apache-beam-2.19.0-source-release.zip";>source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.19.0/apache-beam-2.19.0-source-release.zip.sha512";>SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.19.0/apache-beam-2.19.0-source-release.zip.asc";>signature</a>.</p>
+<a 
href="https://downloads.apache.org/beam/2.19.0/apache-beam-2.19.0-source-release.zip.sha512";>SHA-512</a>.
+<a 
href="https://downloads.apache.org/beam/2.19.0/apache-beam-2.19.0-source-release.zip.asc";>signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12346582";>Release
 notes</a>.</p>
 
 <h3 id="2180-2020-01-23">2.18.0 (2020-01-23)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.18.0/apache-beam-2.18.0-source-release.zip";>source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.18.0/apache-beam-2.18.0-source-release.zip.sha512";>SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.18.0/apache-beam-2.18.0-source-release.zip.asc";>signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.18.0/apache-beam-2.18.0-source-release.zip";>source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.18.0/apache-beam-2.18.0-source-release.zip.sha512";>SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.18.0/apache-beam-2.18.0-source-release.zip.asc";>signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12346383&amp;projectId=12319527";>Release
 notes</a>.</p>
 
 <h3 id="2170-2020-01-06">2.17.0 (2020-01-06)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.17.0/apache-beam-2.17.0-source-release.zip">source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.17.0/apache-beam-2.17.0-source-release.zip.sha512">SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.17.0/apache-beam-2.17.0-source-release.zip.asc">signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.17.0/apache-beam-2.17.0-source-release.zip">source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.17.0/apache-beam-2.17.0-source-release.zip.sha512">SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.17.0/apache-beam-2.17.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12345970">Release
 notes</a>.</p>
 
 <h3 id="2160-2019-10-07">2.16.0 (2019-10-07)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.16.0/apache-beam-2.16.0-source-release.zip">source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.16.0/apache-beam-2.16.0-source-release.zip.sha512">SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.16.0/apache-beam-2.16.0-source-release.zip.asc">signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.16.0/apache-beam-2.16.0-source-release.zip">source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.16.0/apache-beam-2.16.0-source-release.zip.sha512">SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.16.0/apache-beam-2.16.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12345494">Release
 notes</a>.</p>
 
 <h3 id="2150-2019-08-22">2.15.0 (2019-08-22)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.15.0/apache-beam-2.15.0-source-release.zip">source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.15.0/apache-beam-2.15.0-source-release.zip.sha512">SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.15.0/apache-beam-2.15.0-source-release.zip.asc">signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.15.0/apache-beam-2.15.0-source-release.zip">source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.15.0/apache-beam-2.15.0-source-release.zip.sha512">SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.15.0/apache-beam-2.15.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12345489">Release
 notes</a>.</p>
 
 <h3 id="2140-2019-08-01">2.14.0 (2019-08-01)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.14.0/apache-beam-2.14.0-source-release.zip">source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.14.0/apache-beam-2.14.0-source-release.zip.sha512">SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.14.0/apache-beam-2.14.0-source-release.zip.asc">signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.14.0/apache-beam-2.14.0-source-release.zip">source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.14.0/apache-beam-2.14.0-source-release.zip.sha512">SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.14.0/apache-beam-2.14.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12345431">Release
 notes</a>.</p>
 
 <h3 id="2130-2019-05-21">2.13.0 (2019-05-21)</h3>
-<p>Official <a 
href="http://www.apache.org/dyn/closer.cgi/beam/2.13.0/apache-beam-2.13.0-source-release.zip">source
 code download</a>.
-<a 
href="https://www.apache.org/dist/beam/2.13.0/apache-beam-2.13.0-source-release.zip.sha512">SHA-512</a>.
-<a 
href="https://www.apache.org/dist/beam/2.13.0/apache-beam-2.13.0-source-release.zip.asc">signature</a>.</p>
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.13.0/apache-beam-2.13.0-source-release.zip">source
 code download</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.13.0/apache-beam-2.13.0-source-release.zip.sha512">SHA-512</a>.
+<a 
href="https://archive.apache.org/dist/beam/2.13.0/apache-beam-2.13.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://jira.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12345166">Release
 notes</a>.</p>
 
 <h3 id="2120-2019-04-25">2.12.0 (2019-04-25)</h3>
-<p>Official <a 
href="http://archive.apache.org/dyn/closer.cgi/beam/2.12.0/apache-beam-2.12.0-source-release.zip">source
 code download</a>.
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.12.0/apache-beam-2.12.0-source-release.zip">source
 code download</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.12.0/apache-beam-2.12.0-source-release.zip.sha512">SHA-512</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.12.0/apache-beam-2.12.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://jira.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12344944">Release
 notes</a>.</p>
 
 <h3 id="2110-2019-02-26">2.11.0 (2019-02-26)</h3>
-<p>Official <a 
href="http://archive.apache.org/dyn/closer.cgi/beam/2.11.0/apache-beam-2.11.0-source-release.zip">source
 code download</a>.
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.11.0/apache-beam-2.11.0-source-release.zip">source
 code download</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.11.0/apache-beam-2.11.0-source-release.zip.sha512">SHA-512</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.11.0/apache-beam-2.11.0-source-release.zip.asc">signature</a>.</p>
 
 <p><a 
href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&amp;version=12344775">Release
 notes</a>.</p>
 
 <h3 id="2100-2019-02-01">2.10.0 (2019-02-01)</h3>
-<p>Official <a 
href="http://archive.apache.org/dyn/closer.cgi/beam/2.10.0/apache-beam-2.10.0-source-release.zip">source
 code download</a>.
+<p>Official <a 
href="https://archive.apache.org/dist/beam/2.10.0/apache-beam-2.10.0-source-release.zip">source
 code download</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.10.0/apache-beam-2.10.0-source-release.zip.sha512">SHA-512</a>.
 <a 
href="https://archive.apache.org/dist/beam/2.10.0/apache-beam-2.10.0-source-release.zip.asc">signature</a>.</p>
 
diff --git a/website/generated-content/get-started/index.html 
b/website/generated-content/get-started/index.html
index a44072b..60c6a58 100644
--- a/website/generated-content/get-started/index.html
+++ b/website/generated-content/get-started/index.html
@@ -255,7 +255,7 @@ limitations under the License.
 
 <h4 id="support"><a href="/get-started/support">Support</a></h4>
 
-<p>Find resources, such as mailing lists and issue tracking, to help you use 
Beam. Ask questions and discuss topics via <a 
href="http://stackoverflow.com/questions/tagged/apache-beam">Stack Overflow</a> 
or on Beam’s <a href="http://apachebeam.slack.com">Slack Channel</a>.</p>
+<p>Find resources, such as mailing lists and issue tracking, to help you use 
Beam. Ask questions and discuss topics via <a 
href="https://stackoverflow.com/questions/tagged/apache-beam">Stack 
Overflow</a> or on Beam’s <a href="https://apachebeam.slack.com">Slack 
Channel</a>.</p>
 
       </div>
     </div>
diff --git a/website/generated-content/get-started/quickstart-java/index.html 
b/website/generated-content/get-started/quickstart-java/index.html
index 02c74cf..ea57914 100644
--- a/website/generated-content/get-started/quickstart-java/index.html
+++ b/website/generated-content/get-started/quickstart-java/index.html
@@ -254,10 +254,10 @@ limitations under the License.
 
 <ol>
   <li>
-    <p>Download and install the <a 
href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">Java 
Development Kit (JDK)</a> version 8. Verify that the <a 
href="https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/envvars001.html">JAVA_HOME</a>
 environment variable is set and points to your JDK installation.</p>
+    <p>Download and install the <a 
href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">Java 
Development Kit (JDK)</a> version 8. Verify that the <a 
href="https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/envvars001.html">JAVA_HOME</a>
 environment variable is set and points to your JDK installation.</p>
   </li>
   <li>
-    <p>Download and install <a 
href="http://maven.apache.org/download.cgi">Apache Maven</a> by following 
Maven’s <a href="http://maven.apache.org/install.html">installation guide</a> 
for your specific operating system.</p>
+    <p>Download and install <a 
href="https://maven.apache.org/download.cgi">Apache Maven</a> by following 
Maven’s <a href="https://maven.apache.org/install.html">installation guide</a> 
for your specific operating system.</p>
   </li>
 </ol>
 
diff --git a/website/generated-content/get-started/quickstart-py/index.html 
b/website/generated-content/get-started/quickstart-py/index.html
index 2429bcb..735e687 100644
--- a/website/generated-content/get-started/quickstart-py/index.html
+++ b/website/generated-content/get-started/quickstart-py/index.html
@@ -300,7 +300,7 @@ install it. This command might require administrative 
privileges.</p>
 
 <h3 id="install-python-virtual-environment">Install Python virtual 
environment</h3>
 
-<p>It is recommended that you install a <a 
href="http://docs.python-guide.org/en/latest/dev/virtualenvs/">Python virtual 
environment</a>
+<p>It is recommended that you install a <a 
href="https://docs.python-guide.org/en/latest/dev/virtualenvs/">Python virtual 
environment</a>
 for initial experiments. If you do not have <code 
class="highlighter-rouge">virtualenv</code> version 13.1.0 or
 newer, run the following command to install it. This command might require
 administrative privileges.</p>
diff --git a/website/generated-content/index.html 
b/website/generated-content/index.html
index d7a5ff6..d9846ef 100644
--- a/website/generated-content/index.html
+++ b/website/generated-content/index.html
@@ -267,15 +267,15 @@ limitations under the License.
   <div class="logos__logos">
     
     <div class="logos__logos__logo">
-      <a href="http://apex.apache.org"><img src="/images/logo_apex.png" 
alt="APEX" /></a>
+      <a href="https://apex.apache.org"><img src="/images/logo_apex.png" 
alt="APEX" /></a>
     </div>
     
     <div class="logos__logos__logo">
-      <a href="http://flink.apache.org"><img src="/images/logo_flink.png" 
alt="Flink" /></a>
+      <a href="https://flink.apache.org"><img src="/images/logo_flink.png" 
alt="Flink" /></a>
     </div>
     
     <div class="logos__logos__logo">
-      <a href="http://spark.apache.org/"><img src="/images/logo_spark.png" 
alt="Spark" /></a>
+      <a href="https://spark.apache.org/"><img src="/images/logo_spark.png" 
alt="Spark" /></a>
     </div>
     
     <div class="logos__logos__logo">
@@ -283,11 +283,11 @@ limitations under the License.
     </div>
     
     <div class="logos__logos__logo">
-      <a href="http://gearpump.apache.org/"><img 
src="/images/logo_gearpump.png" alt="Gearpump" /></a>
+      <a href="https://gearpump.apache.org/"><img 
src="/images/logo_gearpump.png" alt="Gearpump" /></a>
     </div>
     
     <div class="logos__logos__logo">
-      <a href="http://samza.apache.org/"><img src="/images/logo_samza.png" 
alt="Samza" /></a>
+      <a href="https://samza.apache.org/"><img src="/images/logo_samza.png" 
alt="Samza" /></a>
     </div>
     
   </div>
diff --git a/website/generated-content/privacy_policy/index.html 
b/website/generated-content/privacy_policy/index.html
index f2a43c1..395f707 100644
--- a/website/generated-content/privacy_policy/index.html
+++ b/website/generated-content/privacy_policy/index.html
@@ -177,7 +177,7 @@ limitations under the License.
   <li>The addresses of pages from where you followed a link to our site.</li>
 </ol>
 
-<p>Part of this information is gathered using a tracking cookie set by the <a 
href="http://www.google.com/analytics/">Google Analytics</a> service and 
handled by Google as described in <a 
href="http://www.google.com/privacy.html">their privacy policy</a>. See your 
browser documentation for instructions on how to disable the cookie if you 
prefer not to share this data with Google.</p>
+<p>Part of this information is gathered using a tracking cookie set by the <a 
href="https://www.google.com/analytics/">Google Analytics</a> service and 
handled by Google as described in <a 
href="https://policies.google.com/privacy">their privacy policy</a>. See your 
browser documentation for instructions on how to disable the cookie if you 
prefer not to share this data with Google.</p>
 
 <p>We use the gathered information to help us make our site more useful to 
visitors and to better understand how and when our site is used. We do not 
track or collect personally identifiable information or associate gathered data 
with any personally identifying information from other sources.</p>
 
