Replace most http links with https as a best practice, where possible

Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/62cf4a16
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/62cf4a16
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/62cf4a16

Branch: refs/heads/asf-site
Commit: 62cf4a16daae3cf1b68745b8f676dbb29c167af2
Parents: c2c0905
Author: Sean Owen <so...@cloudera.com>
Authored: Wed May 10 10:56:35 2017 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Wed May 10 19:02:39 2017 +0100

----------------------------------------------------------------------
 _config.yml                    |   2 +-
 community.md                   |   8 +-
 contributing.md                |  10 +-
 developer-tools.md             |   8 +-
 documentation.md               |  40 ++---
 downloads.md                   |   4 +-
 examples.md                    |  10 +-
 faq.md                         |   6 +-
 index.md                       |  12 +-
 mllib/index.md                 |   4 +-
 powered-by.md                  |  12 +-
 release-process.md             |   6 +-
 robots.txt                     |   2 +-
 site/community.html            |   8 +-
 site/contributing.html         |  10 +-
 site/developer-tools.html      |   8 +-
 site/documentation.html        |  40 ++---
 site/downloads.html            |   4 +-
 site/examples.html             |  10 +-
 site/faq.html                  |   6 +-
 site/index.html                |  12 +-
 site/mailing-lists.html        |   2 +-
 site/mllib/index.html          |   4 +-
 site/powered-by.html           |  15 +-
 site/release-process.html      |   6 +-
 site/robots.txt                |   2 +-
 site/sitemap.xml               | 332 ++++++++++++++++++------------------
 site/streaming/index.html      |   8 +-
 site/third-party-projects.html |   8 +-
 site/trademarks.html           |   2 +-
 sitemap.xml                    |  52 +++---
 streaming/index.md             |   8 +-
 third-party-projects.md        |   8 +-
 trademarks.md                  |   2 +-
 34 files changed, 332 insertions(+), 339 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/_config.yml
----------------------------------------------------------------------
diff --git a/_config.yml b/_config.yml
index 18ba30f..9a3934e 100644
--- a/_config.yml
+++ b/_config.yml
@@ -6,4 +6,4 @@ permalink: none
 destination: site
 exclude: ['README.md','content']
 keep_files: ['docs']
-url: http://spark.apache.org
\ No newline at end of file
+url: https://spark.apache.org
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/community.md
----------------------------------------------------------------------
diff --git a/community.md b/community.md
index 9fcb2b5..9fc6136 100644
--- a/community.md
+++ b/community.md
@@ -15,18 +15,18 @@ navigation:
 <h4>StackOverflow</h4>
 
 For usage questions and help (e.g. how to use this Spark API), it is recommended you use the 
-StackOverflow tag <a href="http://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> 
+StackOverflow tag <a href="https://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> 
 as it is an active forum for Spark users' questions and answers.
 
 Some quick tips when using StackOverflow:
 
 - Prior to asking submitting questions, please:
   - Search StackOverflow's 
-  <a href="http://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> tag to see if 
+  <a href="https://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> tag to see if 
   your question has already been answered
   - Search the nabble archive for
   <a href="http://apache-spark-user-list.1001560.n3.nabble.com/">us...@spark.apache.org</a> 
-- Please follow the StackOverflow <a href="http://stackoverflow.com/help/how-to-ask">code of conduct</a>  
+- Please follow the StackOverflow <a href="https://stackoverflow.com/help/how-to-ask">code of conduct</a>  
 - Always use the `apache-spark` tag when asking questions
 - Please also use a secondary tag to specify components so subject matter experts can more easily find them.
  Examples include: `pyspark`, `spark-dataframe`, `spark-streaming`, `spark-r`, `spark-mllib`, 
@@ -58,7 +58,7 @@ project, and scenarios, it is recommended you use the u...@spark.apache.org mail
 Some quick tips when using email:
 
 - Prior to asking submitting questions, please:
-  - Search StackOverflow at <a href="http://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> 
+  - Search StackOverflow at <a href="https://stackoverflow.com/questions/tagged/apache-spark">`apache-spark`</a> 
   to see if your question has already been answered
   - Search the nabble archive for
   <a href="http://apache-spark-user-list.1001560.n3.nabble.com/">us...@spark.apache.org</a> 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/contributing.md
----------------------------------------------------------------------
diff --git a/contributing.md b/contributing.md
index 699b468..99a7edc 100644
--- a/contributing.md
+++ b/contributing.md
@@ -43,7 +43,7 @@ feedback on any performance or correctness issues found in the newer release.
 <h2>Contributing by Reviewing Changes</h2>
 
 Changes to Spark source code are proposed, reviewed and committed via 
-<a href="http://github.com/apache/spark/pulls">Github pull requests</a> (described later). 
+<a href="https://github.com/apache/spark/pulls">Github pull requests</a> (described later). 
 Anyone can view and comment on active changes here. 
 Reviewing others' changes is a good way to learn how the change process works and gain exposure 
 to activity in various parts of the code. You can help by reviewing the changes and asking 
@@ -74,7 +74,7 @@ learning algorithms can happily exist outside of MLlib.
 
 To that end, large and independent new functionality is often rejected for inclusion in Spark 
 itself, but, can and should be hosted as a separate project and repository, and included in 
-the <a href="http://spark-packages.org/">spark-packages.org</a> collection.
+the <a href="https://spark-packages.org/">spark-packages.org</a> collection.
 
 <h2>Contributing Bug Reports</h2>
 
@@ -89,7 +89,7 @@ first. Unreproducible bugs, or simple error reports, may be closed.
 
 It is possible to propose new features as well. These are generally not helpful unless 
 accompanied by detail, such as a design document and/or code change. Large new contributions 
-should consider <a href="http://spark-packages.org/">spark-packages.org</a> first (see above), 
+should consider <a href="https://spark-packages.org/">spark-packages.org</a> first (see above), 
 or be discussed on the mailing 
 list first. Feature requests may be rejected, or closed after a long period of inactivity.
 
@@ -194,7 +194,7 @@ rather than receive iterations of review.
 - Introduces complex new functionality, especially an API that needs to be supported
 - Adds complexity that only helps a niche use case
 - Adds user-space functionality that does not need to be maintained in Spark, but could be hosted 
-externally and indexed by <a href="http://spark-packages.org/">spark-packages.org</a> 
+externally and indexed by <a href="https://spark-packages.org/">spark-packages.org</a> 
 - Changes a public API or semantics (rarely allowed)
 - Adds large dependencies
 - Changes versions of existing dependencies
@@ -260,7 +260,7 @@ Example: `Fix typos in Foo scaladoc`
 <h3>Pull Request</h3>
 
 1. <a href="https://help.github.com/articles/fork-a-repo/">Fork</a> the Github repository at 
-<a href="http://github.com/apache/spark">http://github.com/apache/spark</a> if you haven't already
+<a href="https://github.com/apache/spark">https://github.com/apache/spark</a> if you haven't already
 1. Clone your fork, create a new branch, push commits to the branch.
 1. Consider whether documentation or tests need to be added or updated as part of the change, 
 and add them as needed.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/developer-tools.md
----------------------------------------------------------------------
diff --git a/developer-tools.md b/developer-tools.md
index 17f7b26..dab3e8a 100644
--- a/developer-tools.md
+++ b/developer-tools.md
@@ -279,7 +279,7 @@ To create a Spark project for IntelliJ:
 - In the Import wizard, it's fine to leave settings at their default. However it is usually useful 
 to enable "Import Maven projects automatically", since changes to the project structure will 
 automatically update the IntelliJ project.
-- As documented in <a href="http://spark.apache.org/docs/latest/building-spark.html">Building Spark</a>, 
+- As documented in <a href="https://spark.apache.org/docs/latest/building-spark.html">Building Spark</a>, 
 some build configurations require specific profiles to be 
 enabled. The same profiles that are enabled with `-P[profile name]` above may be enabled on the 
 Profiles screen in the Import wizard. For example, if developing for Hadoop 2.7 with YARN support, 
@@ -363,14 +363,14 @@ incorporated into a maintenance release. These should only be used by Spark deve
 may have bugs and have not undergone the same level of testing as releases. Spark nightly packages 
 are available at:
 
-- Latest master build: <a href="http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest">http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest</a>
-- All nightly builds: <a href="http://people.apache.org/~pwendell/spark-nightly/">http://people.apache.org/~pwendell/spark-nightly/</a>
+- Latest master build: <a href="https://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest">https://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest</a>
+- All nightly builds: <a href="https://people.apache.org/~pwendell/spark-nightly/">https://people.apache.org/~pwendell/spark-nightly/</a>
 
 Spark also publishes SNAPSHOT releases of its Maven artifacts for both master and maintenance 
 branches on a nightly basis. To link to a SNAPSHOT you need to add the ASF snapshot 
 repository to your build. Note that SNAPSHOT artifacts are ephemeral and may change or
 be removed. To use these you must add the ASF snapshot repository at 
-<a href="http://repository.apache.org/snapshots/">http://repository.apache.org/snapshots/<a>.
+<a href="https://repository.apache.org/snapshots/">https://repository.apache.org/snapshots/</a>.
 
 ```
 groupId: org.apache.spark

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/documentation.md
----------------------------------------------------------------------
diff --git a/documentation.md b/documentation.md
index 6d7afdd..b4874c3 100644
--- a/documentation.md
+++ b/documentation.md
@@ -50,7 +50,7 @@ navigation:
 <p>In addition, this page lists other resources for learning Spark.</p>
 
 <h3>Videos</h3>
-See the <a href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w">Apache Spark YouTube Channel</a> for videos from Spark events. There are separate <a href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w/playlists">playlists</a> for videos of different topics. Besides browsing through playlists, you can also find direct links to videos below.
+See the <a href="https://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w">Apache Spark YouTube Channel</a> for videos from Spark events. There are separate <a href="https://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w/playlists">playlists</a> for videos of different topics. Besides browsing through playlists, you can also find direct links to videos below.
 
 <h4>Screencast Tutorial Videos</h4>
 <ul>
@@ -65,17 +65,17 @@ See the <a href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w">Apache
 <ul>
   <li>Videos from Spark Summit 2014, San Francisco, June 30 - July 2 2013
     <ul>
-      <li><a href="http://spark-summit.org/2014/agenda">Full agenda with links to all videos and slides</a></li>
-      <li><a href="http://spark-summit.org/2014/training">Training videos and slides</a></li>
+      <li><a href="https://spark-summit.org/2014/agenda">Full agenda with links to all videos and slides</a></li>
+      <li><a href="https://spark-summit.org/2014/training">Training videos and slides</a></li>
     </ul>
   </li>
   <li>Videos from Spark Summit 2013, San Francisco, Dec 2-3 2013
     <ul>
-      <li><a href="http://spark-summit.org/2013#agendapluginwidget-4">Full agenda with links to all videos and slides</a></li>
-      <li><a href="http://www.youtube.com/playlist?list=PL-x35fyliRwjXj33QvAXN0Vlx0gc6u0je">YouTube playist of all Keynotes</a></li>
-      <li><a href="http://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U">YouTube playist of Track A (Spark Applications)</a></li>
-      <li><a href="http://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U">YouTube playist of Track B (Spark Deployment, Scheduling & Perf, Related projects)</a></li>
-      <li><a href="http://www.youtube.com/playlist?list=PL-x35fyliRwjR1Umntxz52zv3EcKpbzCp">YouTube playist of the Training Day (i.e. the 2nd day of the summit)</a></li>
+      <li><a href="https://spark-summit.org/2013#agendapluginwidget-4">Full agenda with links to all videos and slides</a></li>
+      <li><a href="https://www.youtube.com/playlist?list=PL-x35fyliRwjXj33QvAXN0Vlx0gc6u0je">YouTube playist of all Keynotes</a></li>
+      <li><a href="https://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U">YouTube playist of Track A (Spark Applications)</a></li>
+      <li><a href="https://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U">YouTube playist of Track B (Spark Deployment, Scheduling & Perf, Related projects)</a></li>
+      <li><a href="https://www.youtube.com/playlist?list=PL-x35fyliRwjR1Umntxz52zv3EcKpbzCp">YouTube playist of the Training Day (i.e. the 2nd day of the summit)</a></li>
     </ul>
   </li>
 </ul>
@@ -88,21 +88,21 @@ In addition to the videos listed below, you can also view <a href="http://www.me
   }
 </style>
 <ul>
-  <li><a href="http://www.youtube.com/watch?v=NUQ-8to2XAk&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Spark 1.0 and Beyond</a> (<a href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt">slides</a>) <span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 2014-04-23</span></li>
+  <li><a href="https://www.youtube.com/watch?v=NUQ-8to2XAk&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Spark 1.0 and Beyond</a> (<a href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt">slides</a>) <span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 2014-04-23</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=ju2OQEXqONU&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Adding Native SQL Support to Spark with Catalyst</a> (<a href="http://files.meetup.com/3138542/Spark%20SQL%20Meetup%20-%204-8-2012.pdf">slides</a>) <span class="video-meta-info">by Michael Armbrust, at Tagged in SF, 2014-04-08</span></li>
+  <li><a href="https://www.youtube.com/watch?v=ju2OQEXqONU&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Adding Native SQL Support to Spark with Catalyst</a> (<a href="http://files.meetup.com/3138542/Spark%20SQL%20Meetup%20-%204-8-2012.pdf">slides</a>) <span class="video-meta-info">by Michael Armbrust, at Tagged in SF, 2014-04-08</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=MY0NkZY_tJw&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">SparkR and GraphX</a> (slides: <a href="http://files.meetup.com/3138542/SparkR-meetup.pdf">SparkR</a>, <a href="http://files.meetup.com/3138542/graphx%40spark_meetup03_2014.pdf">GraphX</a>) <span class="video-meta-info">by Shivaram Venkataraman &amp; Dan Crankshaw, at SkyDeck in Berkeley, 2014-03-25</span></li>
+  <li><a href="https://www.youtube.com/watch?v=MY0NkZY_tJw&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">SparkR and GraphX</a> (slides: <a href="http://files.meetup.com/3138542/SparkR-meetup.pdf">SparkR</a>, <a href="http://files.meetup.com/3138542/graphx%40spark_meetup03_2014.pdf">GraphX</a>) <span class="video-meta-info">by Shivaram Venkataraman &amp; Dan Crankshaw, at SkyDeck in Berkeley, 2014-03-25</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=5niXiiEX5pE&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Simple deployment w/ SIMR &amp; Advanced Shark Analytics w/ TGFs</a> (<a href="http://files.meetup.com/3138542/tgf.pptx">slides</a>) <span class="video-meta-info">by Ali Ghodsi, at Huawei in Santa Clara, 2014-02-05</span></li>
+  <li><a href="https://www.youtube.com/watch?v=5niXiiEX5pE&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Simple deployment w/ SIMR &amp; Advanced Shark Analytics w/ TGFs</a> (<a href="http://files.meetup.com/3138542/tgf.pptx">slides</a>) <span class="video-meta-info">by Ali Ghodsi, at Huawei in Santa Clara, 2014-02-05</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=C7gWtxelYNM&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Stores, Monoids &amp; Dependency Injection - Abstractions for Spark</a> (<a href="http://files.meetup.com/3138542/Abstractions%20for%20spark%20streaming%20-%20spark%20meetup%20presentation.pdf">slides</a>) <span class="video-meta-info">by Ryan Weald, at Sharethrough in SF, 2014-01-17</span></li>
+  <li><a href="https://www.youtube.com/watch?v=C7gWtxelYNM&list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Stores, Monoids &amp; Dependency Injection - Abstractions for Spark</a> (<a href="http://files.meetup.com/3138542/Abstractions%20for%20spark%20streaming%20-%20spark%20meetup%20presentation.pdf">slides</a>) <span class="video-meta-info">by Ryan Weald, at Sharethrough in SF, 2014-01-17</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=IxDnF_X4M-8">Distributed Machine Learning using MLbase</a> (<a href="http://files.meetup.com/3138542/sparkmeetup_8_6_13_final_reduced.pdf">slides</a>) <span class="video-meta-info">by Evan Sparks &amp; Ameet Talwalkar, at Twitter in SF, 2013-08-06</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=vJQ2RZj9hqs">GraphX Preview: Graph Analysis on Spark</a> <span class="video-meta-info">by Reynold Xin &amp; Joseph Gonzalez, at Flurry in SF, 2013-07-02</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=D1knCQZQQnw">Deep Dive with Spark Streaming</a> (<a href="http://www.slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617">slides</a>) <span class="video-meta-info">by Tathagata Das, at Plug and Play in Sunnyvale, 2013-06-17</span></li>
+  <li><a href="https://www.youtube.com/watch?v=D1knCQZQQnw">Deep Dive with Spark Streaming</a> (<a href="http://www.slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617">slides</a>) <span class="video-meta-info">by Tathagata Das, at Plug and Play in Sunnyvale, 2013-06-17</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=cAZ624-69PQ">Tachyon and Shark update</a> (slides: <a href="http://files.meetup.com/3138542/2013-05-09%20Shark%20%40%20Spark%20Meetup.pdf">Shark</a>, <a href="http://files.meetup.com/3138542/Tachyon_2013-05-09_Spark_Meetup.pdf">Tachyon</a>) <span class="video-meta-info">by Ali Ghodsi, Haoyuan Li, Reynold Xin, Google Ventures, 2013-05-09</span></li>
 
@@ -119,9 +119,9 @@ In addition to the videos listed below, you can also view <a href="http://www.me
 <a name="summit"></a>
 <h3>Training Materials</h3>
 <ul>
-  <li><a href="http://spark-summit.org/2014/training">Training materials and exercises from Spark Summit 2014</a> are available online. These include videos and slides of talks as well as exercises you can run on your laptop. Topics include Spark core, tuning and debugging, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
-  <li><a href="http://spark-summit.org/2013">Spark Summit 2013</a> included a training session, with slides and videos available on <a href="http://spark-summit.org/summit-2013/#agendapluginwidget-5">the training day agenda</a>.
-    The session also included <a href="http://spark-summit.org/2013/exercises/">exercises</a> that you can walk through on Amazon EC2.</li>
+  <li><a href="https://spark-summit.org/2014/training">Training materials and exercises from Spark Summit 2014</a> are available online. These include videos and slides of talks as well as exercises you can run on your laptop. Topics include Spark core, tuning and debugging, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
+  <li><a href="https://spark-summit.org/2013">Spark Summit 2013</a> included a training session, with slides and videos available on <a href="https://spark-summit.org/summit-2013/#agendapluginwidget-5">the training day agenda</a>.
+    The session also included <a href="https://spark-summit.org/2013/exercises/">exercises</a> that you can walk through on Amazon EC2.</li>
   <li>The <a href="https://amplab.cs.berkeley.edu/">UC Berkeley AMPLab</a> regularly hosts training camps on Spark and related projects.
 Slides, videos and EC2-based exercises from each of these are available online:
 <ul>
@@ -137,8 +137,8 @@ Slides, videos and EC2-based exercises from each of these are available online:
 <h3>Hands-On Exercises</h3>
 
 <ul>
-  <li><a href="http://spark-summit.org/2014/training">Hands-on exercises from Spark Summit 2014</a>. These let you install Spark on your laptop and learn basic concepts, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
-  <li><a href="http://spark-summit.org/2013/exercises/">Hands-on exercises from Spark Summit 2013</a>. These exercises let you launch a small EC2 cluster, load a dataset, and query it with Spark, Shark, Spark Streaming, and MLlib.</li>
+  <li><a href="https://spark-summit.org/2014/training">Hands-on exercises from Spark Summit 2014</a>. These let you install Spark on your laptop and learn basic concepts, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
+  <li><a href="https://spark-summit.org/2013/exercises/">Hands-on exercises from Spark Summit 2013</a>. These exercises let you launch a small EC2 cluster, load a dataset, and query it with Spark, Shark, Spark Streaming, and MLlib.</li>
 </ul>
 
 <h3>External Tutorials, Blog Posts, and Talks</h3>
@@ -146,7 +146,7 @@ Slides, videos and EC2-based exercises from each of these are available online:
 <ul>
   <li><a href="http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark">Using Parquet and Scrooge with Spark</a> &mdash; Scala-friendly Parquet and Avro usage tutorial from Ooyala's Evan Chan</li>
   <li><a href="http://codeforhire.com/2014/02/18/using-spark-with-mongodb/">Using Spark with MongoDB</a> &mdash; by Sampo Niskanen from Wellmo</li>
-  <li><a href="http://spark-summit.org/2013">Spark Summit 2013</a> &mdash; contained 30 talks about Spark use cases, available as slides and videos</li>
+  <li><a href="https://spark-summit.org/2013">Spark Summit 2013</a> &mdash; contained 30 talks about Spark use cases, available as slides and videos</li>
   <li><a href="http://zenfractal.com/2013/08/21/a-powerful-big-data-trio/">A Powerful Big Data Trio: Spark, Parquet and Avro</a> &mdash; Using Parquet in Spark by Matt Massie</li>
   <li><a href="http://www.slideshare.net/EvanChan2/cassandra2013-spark-talk-final">Real-time Analytics with Cassandra, Spark, and Shark</a> &mdash; Presentation by Evan Chan from Ooyala at 2013 Cassandra Summit</li>
   <li><a href="http://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923">Run Spark and Shark on Amazon Elastic MapReduce</a> &mdash; Article by Amazon Elastic MapReduce team member Parviz Deyhim</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/downloads.md
----------------------------------------------------------------------
diff --git a/downloads.md b/downloads.md
index 7806416..16d6062 100644
--- a/downloads.md
+++ b/downloads.md
@@ -31,7 +31,7 @@ $(document).ready(function() {
 
 _Note: Starting version 2.0, Spark is built with Scala 2.11 by default.
 Scala 2.10 users should download the Spark source package and build
-[with Scala 2.10 support](http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210)._
+[with Scala 2.10 support](https://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210)._
 
 <!--
 ### Latest Preview Release
@@ -47,7 +47,7 @@ You can select and download it above.
 -->
 
 ### Link with Spark
-Spark artifacts are [hosted in Maven Central](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.spark%22). You can add a Maven dependency with the following coordinates:
+Spark artifacts are [hosted in Maven Central](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.spark%22). You can add a Maven dependency with the following coordinates:
 
     groupId: org.apache.spark
     artifactId: spark-core_2.11
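
For readers following along, the coordinates in the hunk above map onto a build-file dependency roughly as follows (a sketch; the version string is a placeholder, since no version is given on this page):

```scala
// sbt equivalent of the Maven coordinates shown above.
// "x.y.z" is a placeholder version, not taken from this commit.
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "x.y.z"
```

In a Maven POM the same coordinates would go in a `<dependency>` element with matching `groupId`, `artifactId`, and `version` children.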

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/examples.md
----------------------------------------------------------------------
diff --git a/examples.md b/examples.md
index 2d1dbaa..fe9cc79 100644
--- a/examples.md
+++ b/examples.md
@@ -11,13 +11,13 @@ navigation:
 These examples give a quick overview of the Spark API.
 Spark is built on the concept of <em>distributed datasets</em>, which contain arbitrary Java or
 Python objects. You create a dataset from external data, then apply parallel operations
-to it. The building block of the Spark API is its [RDD API](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds).
+to it. The building block of the Spark API is its [RDD API](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds).
 In the RDD API,
 there are two types of operations: <em>transformations</em>, which define a new dataset based on previous ones,
 and <em>actions</em>, which kick off a job to execute on a cluster.
 On top of Spark’s RDD API, high level APIs are provided, e.g.
-[DataFrame API](http://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes) and
-[Machine Learning API](http://spark.apache.org/docs/latest/mllib-guide.html).
+[DataFrame API](https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes) and
+[Machine Learning API](https://spark.apache.org/docs/latest/mllib-guide.html).
 These high level APIs provide a concise way to conduct certain data operations.
 In this page, we will show examples using RDD API as well as examples using high level APIs.
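
The transformation/action distinction described in this hunk can be sketched in Scala (a hedged illustration, assuming an existing `SparkContext` named `sc` and a hypothetical input path):

```scala
// Transformations are lazy: they only describe new RDDs.
val lines   = sc.textFile("hdfs://.../input.txt")  // hypothetical path
val lengths = lines.map(_.length)                  // transformation: no job runs yet

// An action kicks off an actual job on the cluster.
val total = lengths.reduce(_ + _)
```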
 
@@ -129,7 +129,7 @@ System.out.println("Pi is roughly " + 4.0 * count / NUM_SAMPLES);
 
 <h2>DataFrame API Examples</h2>
 <p>
-In Spark, a <a href="http://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame</a>
+In Spark, a <a href="https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame</a>
 is a distributed collection of data organized into named columns.
 Users can use DataFrame API to perform various relational operations on both external
 data sources and Spark’s built-in distributed collections without providing specific procedures for processing data.
@@ -304,7 +304,7 @@ countsByAge.write().format("json").save("s3a://...");
 
 <h2>Machine Learning Example</h2>
 <p>
-<a href="http://spark.apache.org/docs/latest/mllib-guide.html">MLlib</a>, Spark’s Machine Learning (ML) library, provides many distributed ML algorithms.
+<a href="https://spark.apache.org/docs/latest/mllib-guide.html">MLlib</a>, Spark’s Machine Learning (ML) library, provides many distributed ML algorithms.
 These algorithms cover tasks such as feature extraction, classification, regression, clustering,
 recommendation, and more. 
 MLlib also provides tools such as ML Pipelines for building workflows, CrossValidator for tuning parameters,

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index 694c263..281d7ca 100644
--- a/faq.md
+++ b/faq.md
@@ -15,11 +15,11 @@ Spark is a fast and general processing engine compatible 
with Hadoop data. It ca
 
 <p class="question">Who is using Spark in production?</p>
 
-<p class="answer">As of 2016, surveys show that more than 1000 organizations 
are using Spark in production. Some of them are listed on the <a 
href="{{site.baseurl}}/powered-by.html">Powered By page</a> and at the <a 
href="http://spark-summit.org";>Spark Summit</a>.</p>
+<p class="answer">As of 2016, surveys show that more than 1000 organizations 
are using Spark in production. Some of them are listed on the <a 
href="{{site.baseurl}}/powered-by.html">Powered By page</a> and at the <a 
href="https://spark-summit.org";>Spark Summit</a>.</p>
 
 
 <p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">Many organizations run Spark on clusters of thousands of 
nodes. The largest cluster we know has 8000 of them. In terms of data size, 
Spark has been shown to work well up to petabytes. It has been used to sort 100 
TB of data 3X faster than Hadoop MapReduce on 1/10th of the machines, <a 
href="http://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html";>winning
 the 2014 Daytona GraySort Benchmark</a>, as well as to <a 
href="https://databricks.com/blog/2014/10/10/spark-petabyte-sort.html";>sort 1 
PB</a>. Several production workloads <a 
href="http://databricks.com/blog/2014/08/14/mining-graph-data-with-spark-at-alibaba-taobao.html";>use
 Spark to do ETL and data analysis on PBs of data</a>.</p>
+<p class="answer">Many organizations run Spark on clusters of thousands of 
nodes. The largest cluster we know has 8000 of them. In terms of data size, 
Spark has been shown to work well up to petabytes. It has been used to sort 100 
TB of data 3X faster than Hadoop MapReduce on 1/10th of the machines, <a 
href="https://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html";>winning
 the 2014 Daytona GraySort Benchmark</a>, as well as to <a 
href="https://databricks.com/blog/2014/10/10/spark-petabyte-sort.html";>sort 1 
PB</a>. Several production workloads <a 
href="https://databricks.com/blog/2014/08/14/mining-graph-data-with-spark-at-alibaba-taobao.html";>use
 Spark to do ETL and data analysis on PBs of data</a>.</p>
 
 <p class="question">Does my data need to fit in memory to use Spark?</p>
 
@@ -71,4 +71,4 @@ Please also refer to our
 
 <p class="question">Where can I get more help?</p>
 
-<p class="answer">Please post on StackOverflow's <a 
href="http://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 tag or <a href="http://apache-spark-user-list.1001560.n3.nabble.com";>Spark 
Users</a> mailing list.  For more information, please refer to <a 
href="http://spark.apache.org/community.html#have-questions";>Have 
Questions?</a>.  We'll be glad to help!</p>
+<p class="answer">Please post on StackOverflow's <a 
href="https://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 tag or <a href="https://apache-spark-user-list.1001560.n3.nabble.com";>Spark 
Users</a> mailing list.  For more information, please refer to <a 
href="https://spark.apache.org/community.html#have-questions";>Have 
Questions?</a>.  We'll be glad to help!</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/index.md
----------------------------------------------------------------------
diff --git a/index.md b/index.md
index 1f8f68b..1ccf483 100644
--- a/index.md
+++ b/index.md
@@ -113,9 +113,9 @@ navigation:
     </p>
 
     <p>
-      You can run Spark using its <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a>, on <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html">Hadoop YARN</a>, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
-      Access data in <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://cassandra.apache.org">Cassandra</a>, <a href="http://hbase.apache.org">HBase</a>,
-      <a href="http://hive.apache.org">Hive</a>, <a href="http://tachyon-project.org">Tachyon</a>, and any Hadoop data source.
+      You can run Spark using its <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="https://github.com/amplab/spark-ec2">EC2</a>, on <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">Hadoop YARN</a>, or on <a href="https://mesos.apache.org">Apache Mesos</a>.
+      Access data in <a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="https://cassandra.apache.org">Cassandra</a>, <a href="https://hbase.apache.org">HBase</a>,
+      <a href="https://hive.apache.org">Hive</a>, <a href="http://tachyon-project.org">Tachyon</a>, and any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
@@ -129,7 +129,7 @@ navigation:
 
     <p>
       Spark is used at a wide range of organizations to process large datasets.
-      You can find example use cases at the <a 
href="http://spark-summit.org/summit-2013/";>Spark Summit</a>
+      You can find example use cases at the <a 
href="https://spark-summit.org/summit-2013/";>Spark Summit</a>
       conference, or on the <a href="{{site.baseurl}}/powered-by.html">Powered 
By</a> page.
     </p>
 
@@ -139,7 +139,7 @@ navigation:
     <ul class="list-narrow">
       <li>Use the <a 
href="{{site.baseurl}}/community.html#mailing-lists">mailing lists</a> to ask 
questions.</li>
       <li>In-person events include numerous <a 
href="http://www.meetup.com/topics/apache-spark/";>meetup groups</a> and
-      <a href="http://spark-summit.org/";>Spark Summit</a>.</li>
+      <a href="https://spark-summit.org/";>Spark Summit</a>.</li>
       <li>We use <a 
href="https://issues.apache.org/jira/browse/SPARK";>JIRA</a> for issue 
tracking.</li>
     </ul>
   </div>
@@ -172,7 +172,7 @@ navigation:
       <li><a href="{{site.baseurl}}/downloads.html">Download</a> the latest 
release &mdash; you can run Spark locally on your laptop.</li>
       <li>Read the <a 
href="{{site.baseurl}}/docs/latest/quick-start.html">quick start guide</a>.</li>
       <li>
-        Spark Summit 2014 contained free <a 
href="http://spark-summit.org/2014/training";>training videos and exercises</a>.
+        Spark Summit 2014 contained free <a 
href="https://spark-summit.org/2014/training";>training videos and exercises</a>.
       </li>
       <li>Learn how to <a 
href="{{site.baseurl}}/docs/latest/#launching-on-a-cluster">deploy</a> Spark on 
a cluster.</li>
     </ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/mllib/index.md
----------------------------------------------------------------------
diff --git a/mllib/index.md b/mllib/index.md
index bccd603..b74eeee 100644
--- a/mllib/index.md
+++ b/mllib/index.md
@@ -68,8 +68,8 @@ subproject: MLlib
     <p>
       If you have a Hadoop 2 cluster, you can run Spark and MLlib without any 
pre-installation.
       Otherwise, Spark is easy to run <a 
href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone</a>
-      or on <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a> or 
<a href="http://mesos.apache.org";>Mesos</a>.
-      You can read from <a 
href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html";>HDFS</a>,
 <a href="http://hbase.apache.org";>HBase</a>, or any Hadoop data source.
+      or on <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a> or 
<a href="https://mesos.apache.org";>Mesos</a>.
+      You can read from <a 
href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html";>HDFS</a>,
 <a href="https://hbase.apache.org";>HBase</a>, or any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/powered-by.md
----------------------------------------------------------------------
diff --git a/powered-by.md b/powered-by.md
index 5ecfafb..3c3b95e 100644
--- a/powered-by.md
+++ b/powered-by.md
@@ -11,8 +11,8 @@ navigation:
 
 Organizations creating products and projects for use with Apache Spark, along 
with associated 
 marketing materials, should take care to respect the trademark in "Apache 
Spark" and its logo. 
-Please refer to <a href="http://www.apache.org/foundation/marks/";>ASF 
Trademarks Guidance</a> and 
-associated <a href="http://www.apache.org/foundation/marks/faq/";>FAQ</a> 
+Please refer to <a href="https://www.apache.org/foundation/marks/";>ASF 
Trademarks Guidance</a> and 
+associated <a href="https://www.apache.org/foundation/marks/faq/";>FAQ</a> 
 for comprehensive and authoritative guidance on proper usage of ASF trademarks.
 
 Names that do not include "Spark" at all have no potential trademark issue 
with the Spark project. 
@@ -90,17 +90,17 @@ and external data sources, driving holistic and actionable 
insights.
   anomaly detection, machine learning.
 - <a href="http://www.conviva.com";>Conviva</a> – Experience Live
   - See our talk at <a href="http://ampcamp.berkeley.edu/3/";>AmpCamp</a> on 
how we are 
-  <a 
href="http://www.youtube.com/watch?feature=player_detailpage&v=YaayAatdRNs";>using
 Spark to 
+  <a 
href="https://www.youtube.com/watch?feature=player_detailpage&v=YaayAatdRNs";>using
 Spark to 
   provide real time video optimization</a>
 - <a href="https://www.creditkarma.com/";>Credit Karma</a>
   - We create personalized experiences using Spark.
-- <a href="http://databricks.com";>Databricks</a>
+- <a href="https://databricks.com";>Databricks</a>
   - Formed by the creators of Apache Spark and Shark, Databricks is working to 
greatly expand these 
   open source projects and transform big data analysis in the process. We're 
deeply committed to 
   keeping all work on these systems open source.
   - We provided a hosted service to run Spark, 
   <a href="http://www.databricks.com/cloud";>Databricks Cloud</a>, and partner 
to 
-  <a href="http://databricks.com/support/";>support Apache Spark</a> with other 
Hadoop and big 
+  <a href="https://databricks.com/support/";>support Apache Spark</a> with 
other Hadoop and big 
   data companies.
 - <a href="http://dianping.com";>Dianping.com</a>
 - <a href="http://www.digby.com";>Digby</a>
@@ -220,8 +220,6 @@ efficiently.
   - Automatic pulling of all your data in to Spark for enterprise 
visualisation, predictive 
   analytics and data exploration at a low cost.
 - <a href="http://www.trueffect.com";>TruEffect Inc</a>
-- <a href="http://www.tuplejump.com";>Tuplejump</a>
-  - Software development partners for Apache Spark and Cassandra projects
 - <a href="http://www.ucsc.edu";>UC Santa Cruz</a>
 - <a href="http://missouri.edu/";>University of Missouri Data Analytics and 
Discover Lab</a>
 - <a href="http://videoamp.com/";>VideoAmp</a>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/release-process.md
----------------------------------------------------------------------
diff --git a/release-process.md b/release-process.md
index 28ecaad..a98dc80 100644
--- a/release-process.md
+++ b/release-process.md
@@ -81,9 +81,9 @@ The recommended process is to ask the previous release 
manager to walk you throu
 
 The release voting takes place on the Apache Spark developers list (the PMC is 
voting). 
 Look at past voting threads to see how this proceeds. The email should follow 
-<a 
href="http://mail-archives.apache.org/mod_mbox/spark-dev/201407.mbox/%3ccabpqxss7cf+yauuxck0jnush4207hcp4dkwn3bwfsvdnd86...@mail.gmail.com%3e";>this
 format</a>.
+<a 
href="https://mail-archives.apache.org/mod_mbox/spark-dev/201407.mbox/%3ccabpqxss7cf+yauuxck0jnush4207hcp4dkwn3bwfsvdnd86...@mail.gmail.com%3e";>this
 format</a>.
 
-- Make a shortened link to the full list of JIRAs using <a 
href="http://s.apache.org/";>http://s.apache.org/</a>
+- Make a shortened link to the full list of JIRAs using <a 
href="https://s.apache.org/";>https://s.apache.org/</a>
 - If possible, attach a draft of the release notes with the email
 - Make sure the voting closing time is in UTC format. Use this script to 
generate it
 - Make sure the email is in text format and the links are correct
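The checklist above says to render the vote closing time in UTC using a script that is not reproduced in this hunk. A hedged Python equivalent, assuming the customary 72-hour voting window (the exact script and window length are not shown in this diff):

```python
from datetime import datetime, timedelta, timezone

def vote_close(start, hours=72):
    """Return the vote closing time, `hours` after `start`, rendered in UTC."""
    close = (start + timedelta(hours=hours)).astimezone(timezone.utc)
    return close.strftime("%a, %d %b %Y %H:%M:%S UTC")
```

For example, a vote opened at noon UTC on a Wednesday closes at noon UTC the following Saturday.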
@@ -126,7 +126,7 @@ $ export SVN_EDITOR=vim
 $ svn mv https://dist.apache.org/repos/dist/dev/spark/spark-1.1.1-rc2 
https://dist.apache.org/repos/dist/release/spark/spark-1.1.1
 ```
 
-Verify that the resources are present in <a 
href="http://www.apache.org/dist/spark/";>http://www.apache.org/dist/spark/</a>.
+Verify that the resources are present in <a 
href="https://www.apache.org/dist/spark/";>https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout 
the Apache network. 
 There are a few remaining steps.
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/robots.txt
----------------------------------------------------------------------
diff --git a/robots.txt b/robots.txt
index 0a73784..1ff1091 100644
--- a/robots.txt
+++ b/robots.txt
@@ -1 +1 @@
-Sitemap: http://spark.apache.org/sitemap.xml
+Sitemap: https://spark.apache.org/sitemap.xml
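As the commit message notes, the http-to-https swap is applied "where possible": hosts without working TLS (e.g. tachyon-project.org above) are left alone. A minimal sketch of such a host-aware rewrite; the host list below is purely illustrative and is not the commit's actual selection criteria:

```python
import re

# Illustrative allow-list: hosts known to serve HTTPS. Anything else keeps http://.
HTTPS_HOSTS = {"spark.apache.org", "www.apache.org", "stackoverflow.com"}

def upgrade_links(text, hosts=HTTPS_HOSTS):
    """Rewrite http:// to https:// only for hosts on the allow-list."""
    def repl(match):
        host = match.group(1)
        scheme = "https" if host in hosts else "http"
        return scheme + "://" + host
    return re.sub(r"http://([A-Za-z0-9.-]+)", repl, text)
```

Run over each tracked file, this produces exactly the shape of change seen in the hunks here: scheme swapped, the rest of each URL untouched.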

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/community.html
----------------------------------------------------------------------
diff --git a/site/community.html b/site/community.html
index a4be6db..e6ec06a 100644
--- a/site/community.html
+++ b/site/community.html
@@ -201,7 +201,7 @@
 <h4>StackOverflow</h4>
 
 <p>For usage questions and help (e.g. how to use this Spark API), it is 
recommended you use the 
-StackOverflow tag <a 
href="http://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 
+StackOverflow tag <a 
href="https://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 
 as it is an active forum for Spark users&#8217; questions and answers.</p>
 
 <p>Some quick tips when using StackOverflow:</p>
@@ -210,13 +210,13 @@ as it is an active forum for Spark users&#8217; questions 
and answers.</p>
  <li>Prior to asking or submitting questions, please:
     <ul>
       <li>Search StackOverflow&#8217;s 
-<a 
href="http://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 tag to see if 
+<a 
href="https://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 tag to see if 
 your question has already been answered</li>
       <li>Search the nabble archive for
 <a 
href="http://apache-spark-user-list.1001560.n3.nabble.com/";>us...@spark.apache.org</a></li>
     </ul>
   </li>
-  <li>Please follow the StackOverflow <a 
href="http://stackoverflow.com/help/how-to-ask";>code of conduct</a></li>
+  <li>Please follow the StackOverflow <a 
href="https://stackoverflow.com/help/how-to-ask";>code of conduct</a></li>
   <li>Always use the <code>apache-spark</code> tag when asking questions</li>
   <li>Please also use a secondary tag to specify components so subject matter 
experts can more easily find them.
  Examples include: <code>pyspark</code>, <code>spark-dataframe</code>, 
<code>spark-streaming</code>, <code>spark-r</code>, <code>spark-mllib</code>, 
@@ -251,7 +251,7 @@ project, and scenarios, it is recommended you use the 
u...@spark.apache.org mail
 <ul>
  <li>Prior to asking or submitting questions, please:
     <ul>
-      <li>Search StackOverflow at <a 
href="http://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 
+      <li>Search StackOverflow at <a 
href="https://stackoverflow.com/questions/tagged/apache-spark";><code>apache-spark</code></a>
 
 to see if your question has already been answered</li>
       <li>Search the nabble archive for
 <a 
href="http://apache-spark-user-list.1001560.n3.nabble.com/";>us...@spark.apache.org</a></li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/contributing.html
----------------------------------------------------------------------
diff --git a/site/contributing.html b/site/contributing.html
index 7239faf..4eeb5df 100644
--- a/site/contributing.html
+++ b/site/contributing.html
@@ -229,7 +229,7 @@ feedback on any performance or correctness issues found in 
the newer release.</p
 <h2>Contributing by Reviewing Changes</h2>
 
 <p>Changes to Spark source code are proposed, reviewed and committed via 
-<a href="http://github.com/apache/spark/pulls";>Github pull requests</a> 
(described later). 
+<a href="https://github.com/apache/spark/pulls";>Github pull requests</a> 
(described later). 
 Anyone can view and comment on active changes here. 
 Reviewing others&#8217; changes is a good way to learn how the change process 
works and gain exposure 
 to activity in various parts of the code. You can help by reviewing the 
changes and asking 
@@ -260,7 +260,7 @@ learning algorithms can happily exist outside of MLlib.</p>
 
 <p>To that end, large and independent new functionality is often rejected for 
inclusion in Spark 
 itself, but, can and should be hosted as a separate project and repository, 
and included in 
-the <a href="http://spark-packages.org/";>spark-packages.org</a> collection.</p>
+the <a href="https://spark-packages.org/";>spark-packages.org</a> 
collection.</p>
 
 <h2>Contributing Bug Reports</h2>
 
@@ -275,7 +275,7 @@ first. Unreproducible bugs, or simple error reports, may be 
closed.</p>
 
 <p>It is possible to propose new features as well. These are generally not 
helpful unless 
 accompanied by detail, such as a design document and/or code change. Large new 
contributions 
-should consider <a href="http://spark-packages.org/";>spark-packages.org</a> 
first (see above), 
+should consider <a href="https://spark-packages.org/";>spark-packages.org</a> 
first (see above), 
 or be discussed on the mailing 
 list first. Feature requests may be rejected, or closed after a long period of 
inactivity.</p>
 
@@ -398,7 +398,7 @@ rather than receive iterations of review.</p>
   <li>Introduces complex new functionality, especially an API that needs to be 
supported</li>
   <li>Adds complexity that only helps a niche use case</li>
   <li>Adds user-space functionality that does not need to be maintained in 
Spark, but could be hosted 
-externally and indexed by <a 
href="http://spark-packages.org/";>spark-packages.org</a></li>
+externally and indexed by <a 
href="https://spark-packages.org/";>spark-packages.org</a></li>
   <li>Changes a public API or semantics (rarely allowed)</li>
   <li>Adds large dependencies</li>
   <li>Changes versions of existing dependencies</li>
@@ -483,7 +483,7 @@ Example: <code>Fix typos in Foo scaladoc</code></li>
 
 <ol>
   <li><a href="https://help.github.com/articles/fork-a-repo/";>Fork</a> the 
Github repository at 
-<a href="http://github.com/apache/spark";>http://github.com/apache/spark</a> if 
you haven&#8217;t already</li>
+<a href="https://github.com/apache/spark";>https://github.com/apache/spark</a> 
if you haven&#8217;t already</li>
   <li>Clone your fork, create a new branch, push commits to the branch.</li>
   <li>Consider whether documentation or tests need to be added or updated as 
part of the change, 
 and add them as needed.</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/developer-tools.html
----------------------------------------------------------------------
diff --git a/site/developer-tools.html b/site/developer-tools.html
index dfd35eb..c1a55a9 100644
--- a/site/developer-tools.html
+++ b/site/developer-tools.html
@@ -449,7 +449,7 @@ free IntelliJ Ultimate Edition licenses) and install the 
JetBrains Scala plugin
   <li>In the Import wizard, it&#8217;s fine to leave settings at their 
default. However it is usually useful 
 to enable &#8220;Import Maven projects automatically&#8221;, since changes to 
the project structure will 
 automatically update the IntelliJ project.</li>
-  <li>As documented in <a 
href="http://spark.apache.org/docs/latest/building-spark.html";>Building 
Spark</a>, 
+  <li>As documented in <a 
href="https://spark.apache.org/docs/latest/building-spark.html";>Building 
Spark</a>, 
 some build configurations require specific profiles to be 
 enabled. The same profiles that are enabled with <code>-P[profile name]</code> 
above may be enabled on the 
 Profiles screen in the Import wizard. For example, if developing for Hadoop 
2.7 with YARN support, 
@@ -540,15 +540,15 @@ may have bugs and have not undergone the same level of 
testing as releases. Spar
 are available at:</p>
 
 <ul>
-  <li>Latest master build: <a 
href="http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest";>http://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest</a></li>
-  <li>All nightly builds: <a 
href="http://people.apache.org/~pwendell/spark-nightly/";>http://people.apache.org/~pwendell/spark-nightly/</a></li>
+  <li>Latest master build: <a 
href="https://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest";>https://people.apache.org/~pwendell/spark-nightly/spark-master-bin/latest</a></li>
+  <li>All nightly builds: <a 
href="https://people.apache.org/~pwendell/spark-nightly/";>https://people.apache.org/~pwendell/spark-nightly/</a></li>
 </ul>
 
 <p>Spark also publishes SNAPSHOT releases of its Maven artifacts for both 
master and maintenance 
 branches on a nightly basis. To link to a SNAPSHOT you need to add the ASF 
snapshot 
 repository to your build. Note that SNAPSHOT artifacts are ephemeral and may 
change or
 be removed. To use these you must add the ASF snapshot repository at 
-<a 
href="http://repository.apache.org/snapshots/";>http://repository.apache.org/snapshots/<a>.</a></a></p>
+<a href="https://repository.apache.org/snapshots/">https://repository.apache.org/snapshots/</a>.</p>
 
 <pre><code>groupId: org.apache.spark
 artifactId: spark-core_2.10
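The hunk above directs readers to add the ASF snapshot repository to their build before using the SNAPSHOT coordinates shown. A minimal Maven sketch, using the URL from the hunk; the repository `id` is illustrative:

```xml
<repositories>
  <repository>
    <id>apache-snapshots</id>
    <url>https://repository.apache.org/snapshots/</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```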

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/documentation.html
----------------------------------------------------------------------
diff --git a/site/documentation.html b/site/documentation.html
index 32112ba..758862d 100644
--- a/site/documentation.html
+++ b/site/documentation.html
@@ -236,7 +236,7 @@
 <p>In addition, this page lists other resources for learning Spark.</p>
 
 <h3>Videos</h3>
-<p>See the <a 
href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w";>Apache Spark 
YouTube Channel</a> for videos from Spark events. There are separate <a 
href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w/playlists";>playlists</a>
 for videos of different topics. Besides browsing through playlists, you can 
also find direct links to videos below.</p>
+<p>See the <a 
href="https://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w";>Apache Spark 
YouTube Channel</a> for videos from Spark events. There are separate <a 
href="https://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w/playlists";>playlists</a>
 for videos of different topics. Besides browsing through playlists, you can 
also find direct links to videos below.</p>
 
 <h4>Screencast Tutorial Videos</h4>
 <ul>
@@ -251,17 +251,17 @@
 <ul>
  <li>Videos from Spark Summit 2014, San Francisco, June 30 - July 2 2014
     <ul>
-      <li><a href="http://spark-summit.org/2014/agenda";>Full agenda with links 
to all videos and slides</a></li>
-      <li><a href="http://spark-summit.org/2014/training";>Training videos and 
slides</a></li>
+      <li><a href="https://spark-summit.org/2014/agenda";>Full agenda with 
links to all videos and slides</a></li>
+      <li><a href="https://spark-summit.org/2014/training";>Training videos and 
slides</a></li>
     </ul>
   </li>
   <li>Videos from Spark Summit 2013, San Francisco, Dec 2-3 2013
     <ul>
-      <li><a href="http://spark-summit.org/2013#agendapluginwidget-4";>Full 
agenda with links to all videos and slides</a></li>
-      <li><a 
href="http://www.youtube.com/playlist?list=PL-x35fyliRwjXj33QvAXN0Vlx0gc6u0je";>YouTube
 playist of all Keynotes</a></li>
-      <li><a 
href="http://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U";>YouTube
 playist of Track A (Spark Applications)</a></li>
-      <li><a 
href="http://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U";>YouTube
 playist of Track B (Spark Deployment, Scheduling &amp; Perf, Related 
projects)</a></li>
-      <li><a 
href="http://www.youtube.com/playlist?list=PL-x35fyliRwjR1Umntxz52zv3EcKpbzCp";>YouTube
 playist of the Training Day (i.e. the 2nd day of the summit)</a></li>
+      <li><a href="https://spark-summit.org/2013#agendapluginwidget-4";>Full 
agenda with links to all videos and slides</a></li>
+      <li><a 
href="https://www.youtube.com/playlist?list=PL-x35fyliRwjXj33QvAXN0Vlx0gc6u0je";>YouTube
 playlist of all Keynotes</a></li>
+      <li><a 
href="https://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U";>YouTube
 playlist of Track A (Spark Applications)</a></li>
+      <li><a 
href="https://www.youtube.com/playlist?list=PL-x35fyliRwiNcKwIkDEQZBejiqxEJ79U";>YouTube
 playlist of Track B (Spark Deployment, Scheduling &amp; Perf, Related 
projects)</a></li>
+      <li><a 
href="https://www.youtube.com/playlist?list=PL-x35fyliRwjR1Umntxz52zv3EcKpbzCp";>YouTube
 playlist of the Training Day (i.e. the 2nd day of the summit)</a></li>
     </ul>
   </li>
 </ul>
@@ -275,21 +275,21 @@
 </style>
 
 <ul>
-  <li><a 
href="http://www.youtube.com/watch?v=NUQ-8to2XAk&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Spark
 1.0 and Beyond</a> (<a 
href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt";>slides</a>) 
<span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 
2014-04-23</span></li>
+  <li><a 
href="https://www.youtube.com/watch?v=NUQ-8to2XAk&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Spark
 1.0 and Beyond</a> (<a 
href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt";>slides</a>) 
<span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 
2014-04-23</span></li>
 
-  <li><a 
href="http://www.youtube.com/watch?v=ju2OQEXqONU&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Adding
 Native SQL Support to Spark with Catalyst</a> (<a 
href="http://files.meetup.com/3138542/Spark%20SQL%20Meetup%20-%204-8-2012.pdf";>slides</a>)
 <span class="video-meta-info">by Michael Armbrust, at Tagged in SF, 
2014-04-08</span></li>
+  <li><a 
href="https://www.youtube.com/watch?v=ju2OQEXqONU&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Adding
 Native SQL Support to Spark with Catalyst</a> (<a 
href="http://files.meetup.com/3138542/Spark%20SQL%20Meetup%20-%204-8-2012.pdf";>slides</a>)
 <span class="video-meta-info">by Michael Armbrust, at Tagged in SF, 
2014-04-08</span></li>
 
-  <li><a 
href="http://www.youtube.com/watch?v=MY0NkZY_tJw&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>SparkR
 and GraphX</a> (slides: <a 
href="http://files.meetup.com/3138542/SparkR-meetup.pdf";>SparkR</a>, <a 
href="http://files.meetup.com/3138542/graphx%40spark_meetup03_2014.pdf";>GraphX</a>)
 <span class="video-meta-info">by Shivaram Venkataraman &amp; Dan Crankshaw, at 
SkyDeck in Berkeley, 2014-03-25</span></li>
+  <li><a 
href="https://www.youtube.com/watch?v=MY0NkZY_tJw&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>SparkR
 and GraphX</a> (slides: <a 
href="http://files.meetup.com/3138542/SparkR-meetup.pdf";>SparkR</a>, <a 
href="http://files.meetup.com/3138542/graphx%40spark_meetup03_2014.pdf";>GraphX</a>)
 <span class="video-meta-info">by Shivaram Venkataraman &amp; Dan Crankshaw, at 
SkyDeck in Berkeley, 2014-03-25</span></li>
 
-  <li><a 
href="http://www.youtube.com/watch?v=5niXiiEX5pE&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Simple
 deployment w/ SIMR &amp; Advanced Shark Analytics w/ TGFs</a> (<a 
href="http://files.meetup.com/3138542/tgf.pptx";>slides</a>) <span 
class="video-meta-info">by Ali Ghodsi, at Huawei in Santa Clara, 
2014-02-05</span></li>
+  <li><a 
href="https://www.youtube.com/watch?v=5niXiiEX5pE&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Simple
 deployment w/ SIMR &amp; Advanced Shark Analytics w/ TGFs</a> (<a 
href="http://files.meetup.com/3138542/tgf.pptx";>slides</a>) <span 
class="video-meta-info">by Ali Ghodsi, at Huawei in Santa Clara, 
2014-02-05</span></li>
 
-  <li><a 
href="http://www.youtube.com/watch?v=C7gWtxelYNM&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Stores,
 Monoids &amp; Dependency Injection - Abstractions for Spark</a> (<a 
href="http://files.meetup.com/3138542/Abstractions%20for%20spark%20streaming%20-%20spark%20meetup%20presentation.pdf";>slides</a>)
 <span class="video-meta-info">by Ryan Weald, at Sharethrough in SF, 
2014-01-17</span></li>
+  <li><a 
href="https://www.youtube.com/watch?v=C7gWtxelYNM&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a";>Stores,
 Monoids &amp; Dependency Injection - Abstractions for Spark</a> (<a 
href="http://files.meetup.com/3138542/Abstractions%20for%20spark%20streaming%20-%20spark%20meetup%20presentation.pdf";>slides</a>)
 <span class="video-meta-info">by Ryan Weald, at Sharethrough in SF, 
2014-01-17</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=IxDnF_X4M-8";>Distributed 
Machine Learning using MLbase</a> (<a 
href="http://files.meetup.com/3138542/sparkmeetup_8_6_13_final_reduced.pdf";>slides</a>)
 <span class="video-meta-info">by Evan Sparks &amp; Ameet Talwalkar, at Twitter 
in SF, 2013-08-06</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=vJQ2RZj9hqs";>GraphX Preview: 
Graph Analysis on Spark</a> <span class="video-meta-info">by Reynold Xin &amp; 
Joseph Gonzalez, at Flurry in SF, 2013-07-02</span></li>
 
-  <li><a href="http://www.youtube.com/watch?v=D1knCQZQQnw";>Deep Dive with 
Spark Streaming</a> (<a 
href="http://www.slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617";>slides</a>)
 <span class="video-meta-info">by Tathagata Das, at Plug and Play in Sunnyvale, 
2013-06-17</span></li>
+  <li><a href="https://www.youtube.com/watch?v=D1knCQZQQnw";>Deep Dive with 
Spark Streaming</a> (<a 
href="http://www.slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617";>slides</a>)
 <span class="video-meta-info">by Tathagata Das, at Plug and Play in Sunnyvale, 
2013-06-17</span></li>
 
   <li><a href="https://www.youtube.com/watch?v=cAZ624-69PQ";>Tachyon and Shark 
update</a> (slides: <a 
href="http://files.meetup.com/3138542/2013-05-09%20Shark%20%40%20Spark%20Meetup.pdf";>Shark</a>,
 <a 
href="http://files.meetup.com/3138542/Tachyon_2013-05-09_Spark_Meetup.pdf";>Tachyon</a>)
 <span class="video-meta-info">by Ali Ghodsi, Haoyuan Li, Reynold Xin, Google 
Ventures, 2013-05-09</span></li>
 
@@ -305,9 +305,9 @@
 <p><a name="summit"></a></p>
 <h3>Training Materials</h3>
 <ul>
-  <li><a href="http://spark-summit.org/2014/training";>Training materials and 
exercises from Spark Summit 2014</a> are available online. These include videos 
and slides of talks as well as exercises you can run on your laptop. Topics 
include Spark core, tuning and debugging, Spark SQL, Spark Streaming, GraphX 
and MLlib.</li>
-  <li><a href="http://spark-summit.org/2013";>Spark Summit 2013</a> included a 
training session, with slides and videos available on <a 
href="http://spark-summit.org/summit-2013/#agendapluginwidget-5";>the training 
day agenda</a>.
-    The session also included <a 
href="http://spark-summit.org/2013/exercises/";>exercises</a> that you can walk 
through on Amazon EC2.</li>
+  <li><a href="https://spark-summit.org/2014/training";>Training materials and 
exercises from Spark Summit 2014</a> are available online. These include videos 
and slides of talks as well as exercises you can run on your laptop. Topics 
include Spark core, tuning and debugging, Spark SQL, Spark Streaming, GraphX 
and MLlib.</li>
+  <li><a href="https://spark-summit.org/2013";>Spark Summit 2013</a> included a 
training session, with slides and videos available on <a 
href="https://spark-summit.org/summit-2013/#agendapluginwidget-5";>the training 
day agenda</a>.
+    The session also included <a 
href="https://spark-summit.org/2013/exercises/";>exercises</a> that you can walk 
through on Amazon EC2.</li>
   <li>The <a href="https://amplab.cs.berkeley.edu/";>UC Berkeley AMPLab</a> 
regularly hosts training camps on Spark and related projects.
 Slides, videos and EC2-based exercises from each of these are available online:
 <ul>
@@ -322,8 +322,8 @@ Slides, videos and EC2-based exercises from each of these 
are available online:
 <h3>Hands-On Exercises</h3>
 
 <ul>
-  <li><a href="http://spark-summit.org/2014/training";>Hands-on exercises from 
Spark Summit 2014</a>. These let you install Spark on your laptop and learn 
basic concepts, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
-  <li><a href="http://spark-summit.org/2013/exercises/";>Hands-on exercises 
from Spark Summit 2013</a>. These exercises let you launch a small EC2 cluster, 
load a dataset, and query it with Spark, Shark, Spark Streaming, and MLlib.</li>
+  <li><a href="https://spark-summit.org/2014/training";>Hands-on exercises from 
Spark Summit 2014</a>. These let you install Spark on your laptop and learn 
basic concepts, Spark SQL, Spark Streaming, GraphX and MLlib.</li>
+  <li><a href="https://spark-summit.org/2013/exercises/";>Hands-on exercises 
from Spark Summit 2013</a>. These exercises let you launch a small EC2 cluster, 
load a dataset, and query it with Spark, Shark, Spark Streaming, and MLlib.</li>
 </ul>
 
 <h3>External Tutorials, Blog Posts, and Talks</h3>
@@ -331,7 +331,7 @@ Slides, videos and EC2-based exercises from each of these 
are available online:
 <ul>
   <li><a 
href="http://engineering.ooyala.com/blog/using-parquet-and-scrooge-spark";>Using 
Parquet and Scrooge with Spark</a> &mdash; Scala-friendly Parquet and Avro 
usage tutorial from Ooyala's Evan Chan</li>
   <li><a 
href="http://codeforhire.com/2014/02/18/using-spark-with-mongodb/";>Using Spark 
with MongoDB</a> &mdash; by Sampo Niskanen from Wellmo</li>
-  <li><a href="http://spark-summit.org/2013";>Spark Summit 2013</a> &mdash; 
contained 30 talks about Spark use cases, available as slides and videos</li>
+  <li><a href="https://spark-summit.org/2013";>Spark Summit 2013</a> &mdash; 
contained 30 talks about Spark use cases, available as slides and videos</li>
   <li><a href="http://zenfractal.com/2013/08/21/a-powerful-big-data-trio/";>A 
Powerful Big Data Trio: Spark, Parquet and Avro</a> &mdash; Using Parquet in 
Spark by Matt Massie</li>
   <li><a 
href="http://www.slideshare.net/EvanChan2/cassandra2013-spark-talk-final";>Real-time
 Analytics with Cassandra, Spark, and Shark</a> &mdash; Presentation by Evan 
Chan from Ooyala at 2013 Cassandra Summit</li>
   <li><a 
href="http://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923";>Run 
Spark and Shark on Amazon Elastic MapReduce</a> &mdash; Article by Amazon 
Elastic MapReduce team member Parviz Deyhim</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/downloads.html
----------------------------------------------------------------------
diff --git a/site/downloads.html b/site/downloads.html
index 5134991..e04b014 100644
--- a/site/downloads.html
+++ b/site/downloads.html
@@ -225,7 +225,7 @@ $(document).ready(function() {
 
 <p><em>Note: Starting version 2.0, Spark is built with Scala 2.11 by default.
 Scala 2.10 users should download the Spark source package and build
-<a href="http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210">with Scala 2.10 support</a>.</em></p>
+<a href="https://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210">with Scala 2.10 support</a>.</em></p>
 
 <!--
 ### Latest Preview Release
@@ -241,7 +241,7 @@ You can select and download it above.
 -->
 
 <h3 id="link-with-spark">Link with Spark</h3>
-<p>Spark artifacts are <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.spark%22">hosted in Maven Central</a>. You can add a Maven dependency with the following coordinates:</p>
+<p>Spark artifacts are <a href="https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.spark%22">hosted in Maven Central</a>. You can add a Maven dependency with the following coordinates:</p>
 
 <pre><code>groupId: org.apache.spark
 artifactId: spark-core_2.11

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/examples.html
----------------------------------------------------------------------
diff --git a/site/examples.html b/site/examples.html
index 4c69cd1..9545023 100644
--- a/site/examples.html
+++ b/site/examples.html
@@ -198,13 +198,13 @@
 <p>These examples give a quick overview of the Spark API.
 Spark is built on the concept of <em>distributed datasets</em>, which contain arbitrary Java or
 Python objects. You create a dataset from external data, then apply parallel operations
-to it. The building block of the Spark API is its <a href="http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds">RDD API</a>.
+to it. The building block of the Spark API is its <a href="https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds">RDD API</a>.
 In the RDD API,
 there are two types of operations: <em>transformations</em>, which define a new dataset based on previous ones,
 and <em>actions</em>, which kick off a job to execute on a cluster.
 On top of Spark’s RDD API, high level APIs are provided, e.g.
-<a href="http://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame API</a> and
-<a href="http://spark.apache.org/docs/latest/mllib-guide.html">Machine Learning API</a>.
+<a href="https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame API</a> and
+<a href="https://spark.apache.org/docs/latest/mllib-guide.html">Machine Learning API</a>.
 These high level APIs provide a concise way to conduct certain data operations.
 In this page, we will show examples using RDD API as well as examples using high level APIs.</p>
 
@@ -316,7 +316,7 @@ In this page, we will show examples using RDD API as well as examples using high
 
 <h2>DataFrame API Examples</h2>
 <p>
-In Spark, a <a href="http://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame</a>
+In Spark, a <a href="https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframes">DataFrame</a>
 is a distributed collection of data organized into named columns.
 Users can use DataFrame API to perform various relational operations on both external
 data sources and Spark’s built-in distributed collections without providing specific procedures for processing data.
@@ -491,7 +491,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 
 <h2>Machine Learning Example</h2>
 <p>
-<a href="http://spark.apache.org/docs/latest/mllib-guide.html">MLlib</a>, Spark’s Machine Learning (ML) library, provides many distributed ML algorithms.
+<a href="https://spark.apache.org/docs/latest/mllib-guide.html">MLlib</a>, Spark’s Machine Learning (ML) library, provides many distributed ML algorithms.
 These algorithms cover tasks such as feature extraction, classification, regression, clustering,
 recommendation, and more. 
 MLlib also provides tools such as ML Pipelines for building workflows, CrossValidator for tuning parameters,

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/faq.html
----------------------------------------------------------------------
diff --git a/site/faq.html b/site/faq.html
index c537fe9..40440ac 100644
--- a/site/faq.html
+++ b/site/faq.html
@@ -202,10 +202,10 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 
 <p class="question">Who is using Spark in production?</p>
 
-<p class="answer">As of 2016, surveys show that more than 1000 organizations are using Spark in production. Some of them are listed on the <a href="/powered-by.html">Powered By page</a> and at the <a href="http://spark-summit.org">Spark Summit</a>.</p>
+<p class="answer">As of 2016, surveys show that more than 1000 organizations are using Spark in production. Some of them are listed on the <a href="/powered-by.html">Powered By page</a> and at the <a href="https://spark-summit.org">Spark Summit</a>.</p>
 
 <p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">Many organizations run Spark on clusters of thousands of nodes. The largest cluster we know has 8000 of them. In terms of data size, Spark has been shown to work well up to petabytes. It has been used to sort 100 TB of data 3X faster than Hadoop MapReduce on 1/10th of the machines, <a href="http://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html">winning the 2014 Daytona GraySort Benchmark</a>, as well as to <a href="https://databricks.com/blog/2014/10/10/spark-petabyte-sort.html">sort 1 PB</a>. Several production workloads <a href="http://databricks.com/blog/2014/08/14/mining-graph-data-with-spark-at-alibaba-taobao.html">use Spark to do ETL and data analysis on PBs of data</a>.</p>
+<p class="answer">Many organizations run Spark on clusters of thousands of nodes. The largest cluster we know has 8000 of them. In terms of data size, Spark has been shown to work well up to petabytes. It has been used to sort 100 TB of data 3X faster than Hadoop MapReduce on 1/10th of the machines, <a href="https://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html">winning the 2014 Daytona GraySort Benchmark</a>, as well as to <a href="https://databricks.com/blog/2014/10/10/spark-petabyte-sort.html">sort 1 PB</a>. Several production workloads <a href="https://databricks.com/blog/2014/08/14/mining-graph-data-with-spark-at-alibaba-taobao.html">use Spark to do ETL and data analysis on PBs of data</a>.</p>
 
 <p class="question">Does my data need to fit in memory to use Spark?</p>
 
@@ -257,7 +257,7 @@ Please also refer to our
 
 <p class="question">Where can I get more help?</p>
 
-<p class="answer">Please post on StackOverflow's <a href="http://stackoverflow.com/questions/tagged/apache-spark"><code>apache-spark</code></a> tag or <a href="http://apache-spark-user-list.1001560.n3.nabble.com">Spark Users</a> mailing list.  For more information, please refer to <a href="http://spark.apache.org/community.html#have-questions">Have Questions?</a>.  We'll be glad to help!</p>
+<p class="answer">Please post on StackOverflow's <a href="https://stackoverflow.com/questions/tagged/apache-spark"><code>apache-spark</code></a> tag or <a href="https://apache-spark-user-list.1001560.n3.nabble.com">Spark Users</a> mailing list.  For more information, please refer to <a href="https://spark.apache.org/community.html#have-questions">Have Questions?</a>.  We'll be glad to help!</p>
 </p>
 
   </div>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/index.html
----------------------------------------------------------------------
diff --git a/site/index.html b/site/index.html
index 9b0c4bb..1044faa 100644
--- a/site/index.html
+++ b/site/index.html
@@ -293,9 +293,9 @@
     </p>
 
     <p>
-      You can run Spark using its <a href="/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="/docs/latest/ec2-scripts.html">EC2</a>, on <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html">Hadoop YARN</a>, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
-      Access data in <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://cassandra.apache.org">Cassandra</a>, <a href="http://hbase.apache.org">HBase</a>,
-      <a href="http://hive.apache.org">Hive</a>, <a href="http://tachyon-project.org">Tachyon</a>, and any Hadoop data source.
+      You can run Spark using its <a href="/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="https://github.com/amplab/spark-ec2">EC2</a>, on <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">Hadoop YARN</a>, or on <a href="https://mesos.apache.org">Apache Mesos</a>.
+      Access data in <a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="https://cassandra.apache.org">Cassandra</a>, <a href="https://hbase.apache.org">HBase</a>,
+      <a href="https://hive.apache.org">Hive</a>, <a href="http://tachyon-project.org">Tachyon</a>, and any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
@@ -309,7 +309,7 @@
 
     <p>
       Spark is used at a wide range of organizations to process large datasets.
-      You can find example use cases at the <a href="http://spark-summit.org/summit-2013/">Spark Summit</a>
+      You can find example use cases at the <a href="https://spark-summit.org/summit-2013/">Spark Summit</a>
       conference, or on the <a href="/powered-by.html">Powered By</a> page.
     </p>
 
@@ -319,7 +319,7 @@
     <ul class="list-narrow">
      <li>Use the <a href="/community.html#mailing-lists">mailing lists</a> to ask questions.</li>
      <li>In-person events include numerous <a href="http://www.meetup.com/topics/apache-spark/">meetup groups</a> and
-      <a href="http://spark-summit.org/";>Spark Summit</a>.</li>
+      <a href="https://spark-summit.org/";>Spark Summit</a>.</li>
      <li>We use <a href="https://issues.apache.org/jira/browse/SPARK">JIRA</a> for issue tracking.</li>
     </ul>
   </div>
@@ -352,7 +352,7 @@
      <li><a href="/downloads.html">Download</a> the latest release &mdash; you can run Spark locally on your laptop.</li>
      <li>Read the <a href="/docs/latest/quick-start.html">quick start guide</a>.</li>
       <li>
-        Spark Summit 2014 contained free <a href="http://spark-summit.org/2014/training">training videos and exercises</a>.
+        Spark Summit 2014 contained free <a href="https://spark-summit.org/2014/training">training videos and exercises</a>.
       </li>
      <li>Learn how to <a href="/docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster.</li>
     </ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/mailing-lists.html
----------------------------------------------------------------------
diff --git a/site/mailing-lists.html b/site/mailing-lists.html
index 38f6199..4a5535e 100644
--- a/site/mailing-lists.html
+++ b/site/mailing-lists.html
@@ -12,7 +12,7 @@
 
   
     <meta http-equiv="refresh" content="0; url=/community.html">
-    <link rel="canonical" href="http://spark.apache.org/community.html" />
+    <link rel="canonical" href="https://spark.apache.org/community.html" />
   
 
   

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/mllib/index.html
----------------------------------------------------------------------
diff --git a/site/mllib/index.html b/site/mllib/index.html
index 8e17fb4..d279895 100644
--- a/site/mllib/index.html
+++ b/site/mllib/index.html
@@ -258,8 +258,8 @@
     <p>
      If you have a Hadoop 2 cluster, you can run Spark and MLlib without any pre-installation.
      Otherwise, Spark is easy to run <a href="/docs/latest/spark-standalone.html">standalone</a>
-      or on <a href="/docs/latest/ec2-scripts.html">EC2</a> or <a href="http://mesos.apache.org">Mesos</a>.
-      You can read from <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://hbase.apache.org">HBase</a>, or any Hadoop data source.
+      or on <a href="/docs/latest/ec2-scripts.html">EC2</a> or <a href="https://mesos.apache.org">Mesos</a>.
+      You can read from <a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="https://hbase.apache.org">HBase</a>, or any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/powered-by.html
----------------------------------------------------------------------
diff --git a/site/powered-by.html b/site/powered-by.html
index 30c5f64..be06552 100644
--- a/site/powered-by.html
+++ b/site/powered-by.html
@@ -197,8 +197,8 @@
 
 <p>Organizations creating products and projects for use with Apache Spark, along with associated 
 marketing materials, should take care to respect the trademark in &#8220;Apache Spark&#8221; and its logo. 
-Please refer to <a href="http://www.apache.org/foundation/marks/">ASF Trademarks Guidance</a> and 
-associated <a href="http://www.apache.org/foundation/marks/faq/">FAQ</a> 
+Please refer to <a href="https://www.apache.org/foundation/marks/">ASF Trademarks Guidance</a> and 
+associated <a href="https://www.apache.org/foundation/marks/faq/">FAQ</a> 
 for comprehensive and authoritative guidance on proper usage of ASF trademarks.</p>
 
 <p>Names that do not include &#8220;Spark&#8221; at all have no potential trademark issue with the Spark project. 
@@ -311,7 +311,7 @@ anomaly detection, machine learning.</li>
  <li><a href="http://www.conviva.com">Conviva</a> – Experience Live
    <ul>
      <li>See our talk at <a href="http://ampcamp.berkeley.edu/3/">AmpCamp</a> on how we are 
-<a href="http://www.youtube.com/watch?feature=player_detailpage&amp;v=YaayAatdRNs">using Spark to 
+<a href="https://www.youtube.com/watch?feature=player_detailpage&amp;v=YaayAatdRNs">using Spark to 
 provide real time video optimization</a></li>
     </ul>
   </li>
@@ -320,14 +320,14 @@ provide real time video optimization</a></li>
       <li>We create personalized experiences using Spark.</li>
     </ul>
   </li>
-  <li><a href="http://databricks.com">Databricks</a>
+  <li><a href="https://databricks.com">Databricks</a>
     <ul>
      <li>Formed by the creators of Apache Spark and Shark, Databricks is working to greatly expand these 
 open source projects and transform big data analysis in the process. We&#8217;re deeply committed to 
 keeping all work on these systems open source.</li>
       <li>We provided a hosted service to run Spark, 
 <a href="http://www.databricks.com/cloud">Databricks Cloud</a>, and partner to 
-<a href="http://databricks.com/support/">support Apache Spark</a> with other Hadoop and big 
+<a href="https://databricks.com/support/">support Apache Spark</a> with other Hadoop and big 
 data companies.</li>
     </ul>
   </li>
@@ -521,11 +521,6 @@ analytics and data exploration at a low cost.</li>
     </ul>
   </li>
  <li><a href="http://www.trueffect.com">TruEffect Inc</a></li>
-  <li><a href="http://www.tuplejump.com">Tuplejump</a>
-    <ul>
-      <li>Software development partners for Apache Spark and Cassandra projects</li>
-    </ul>
-  </li>
  <li><a href="http://www.ucsc.edu">UC Santa Cruz</a></li>
  <li><a href="http://missouri.edu/">University of Missouri Data Analytics and Discover Lab</a></li>
  <li><a href="http://videoamp.com/">VideoAmp</a>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/release-process.html
----------------------------------------------------------------------
diff --git a/site/release-process.html b/site/release-process.html
index 7782ab0..f8b441d 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -281,10 +281,10 @@ The recommended process is to ask the previous release manager to walk you throu
 
 <p>The release voting takes place on the Apache Spark developers list (the PMC is voting). 
 Look at past voting threads to see how this proceeds. The email should follow 
-<a href="http://mail-archives.apache.org/mod_mbox/spark-dev/201407.mbox/%3ccabpqxss7cf+yauuxck0jnush4207hcp4dkwn3bwfsvdnd86...@mail.gmail.com%3e">this format</a>.</p>
+<a href="https://mail-archives.apache.org/mod_mbox/spark-dev/201407.mbox/%3ccabpqxss7cf+yauuxck0jnush4207hcp4dkwn3bwfsvdnd86...@mail.gmail.com%3e">this format</a>.</p>
 
 <ul>
-  <li>Make a shortened link to the full list of JIRAs using <a href="http://s.apache.org/">http://s.apache.org/</a></li>
+  <li>Make a shortened link to the full list of JIRAs using <a href="https://s.apache.org/">https://s.apache.org/</a></li>
   <li>If possible, attach a draft of the release notes with the email</li>
  <li>Make sure the voting closing time is in UTC format. Use this script to generate it</li>
   <li>Make sure the email is in text format and the links are correct</li>
@@ -327,7 +327,7 @@ $ export SVN_EDITOR=vim
 $ svn mv https://dist.apache.org/repos/dist/dev/spark/spark-1.1.1-rc2 https://dist.apache.org/repos/dist/release/spark/spark-1.1.1
 </code></pre>
 
-<p>Verify that the resources are present in <a href="http://www.apache.org/dist/spark/">http://www.apache.org/dist/spark/</a>.
+<p>Verify that the resources are present in <a href="https://www.apache.org/dist/spark/">https://www.apache.org/dist/spark/</a>.
 It may take a while for them to be visible. This will be mirrored throughout the Apache network. 
 There are a few remaining steps.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/robots.txt
----------------------------------------------------------------------
diff --git a/site/robots.txt b/site/robots.txt
index 0a73784..1ff1091 100644
--- a/site/robots.txt
+++ b/site/robots.txt
@@ -1 +1 @@
-Sitemap: http://spark.apache.org/sitemap.xml
+Sitemap: https://spark.apache.org/sitemap.xml


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
