Modified: websites/staging/climate/trunk/content/api/current/ocw/overview.html
==============================================================================
--- websites/staging/climate/trunk/content/api/current/ocw/overview.html 
(original)
+++ websites/staging/climate/trunk/content/api/current/ocw/overview.html Wed 
May  2 18:42:25 2018
@@ -1,23 +1,21 @@
+
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 
-
 <html xmlns="http://www.w3.org/1999/xhtml">
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
-    
-    <title>Overview &#8212; Apache Open Climate Workbench 1.2.0 
documentation</title>
-    
+    <title>Overview &#8212; Apache Open Climate Workbench 1.3.0 
documentation</title>
     <link rel="stylesheet" href="../_static/alabaster.css" type="text/css" />
     <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
-    
     <script type="text/javascript">
       var DOCUMENTATION_OPTIONS = {
         URL_ROOT:    '../',
-        VERSION:     '1.2.0',
+        VERSION:     '1.3.0',
         COLLAPSE_INDEX: false,
         FILE_SUFFIX: '.html',
-        HAS_SOURCE:  true
+        HAS_SOURCE:  true,
+        SOURCELINK_SUFFIX: '.txt'
       };
     </script>
     <script type="text/javascript" src="../_static/jquery.js"></script>
@@ -25,7 +23,6 @@
     <script type="text/javascript" src="../_static/doctools.js"></script>
     <link rel="index" title="Index" href="../genindex.html" />
     <link rel="search" title="Search" href="../search.html" />
-    <link rel="top" title="Apache Open Climate Workbench 1.2.0 documentation" 
href="../index.html" />
     <link rel="next" title="Dataset Module" href="dataset.html" />
     <link rel="prev" title="Welcome to Apache Open Climate Workbench’s 
documentation!" href="../index.html" />
    
@@ -35,7 +32,7 @@
   <meta name="viewport" content="width=device-width, initial-scale=0.9, 
maximum-scale=0.9" />
 
   </head>
-  <body role="document">
+  <body>
   
 
     <div class="document">
@@ -59,7 +56,7 @@
 </div>
 <div class="section" id="data-sources">
 <h2>Data Sources<a class="headerlink" href="#data-sources" title="Permalink to 
this headline">¶</a></h2>
-<p>OCW data sources allow users to easily load <a class="reference internal" 
href="dataset.html#dataset.Dataset" title="dataset.Dataset"><code class="xref 
py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> objects from a number of places. 
These data sources help with step 1 of an evaluation above. In general the 
primary file format that is supported is NetCDF. For instance, the <a 
class="reference internal" href="../data_source/data_sources.html#module-local" 
title="local"><code class="xref py py-mod docutils literal"><span 
class="pre">local</span></code></a>, <code class="xref py py-mod docutils 
literal"><span class="pre">dap</span></code> and <code class="xref py py-mod 
docutils literal"><span class="pre">esgf</span></code> data sources only 
support loading NetCDF files from your local machine, an OpenDAP URL, and the 
ESGF respectively. Some data sources, such as <a class="reference internal" 
href="../data_source/data_sources.html#module-rcmed" title
 ="rcmed"><code class="xref py py-mod docutils literal"><span 
class="pre">rcmed</span></code></a>, point to externally supported data 
sources. In the case of the RCMED data source, the Regional Climate Model 
Evaluation Database is run by NASA&#8217;s Jet Propulsion Laboratory.</p>
+<p>OCW data sources allow users to easily load <a class="reference internal" 
href="dataset.html#dataset.Dataset" title="dataset.Dataset"><code class="xref 
py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> objects from a number of places. 
These data sources help with step 1 of an evaluation above. In general, the 
primary supported file format is NetCDF. For instance, the <a 
class="reference internal" href="../data_source/data_sources.html#module-local" 
title="local"><code class="xref py py-mod docutils literal"><span 
class="pre">local</span></code></a>, <a class="reference internal" 
href="../data_source/data_sources.html#module-dap" title="dap"><code 
class="xref py py-mod docutils literal"><span class="pre">dap</span></code></a> 
and <a class="reference internal" 
href="../data_source/data_sources.html#module-esgf" title="esgf"><code 
class="xref py py-mod docutils literal"><span 
class="pre">esgf</span></code></a> data sources only support loading 
NetCDF files from your local machine, an OpenDAP URL, and the ESGF, 
respectively. Some data sources, such as <a class="reference internal" 
href="../data_source/data_sources.html#module-rcmed" title="rcmed"><code 
class="xref py py-mod docutils literal"><span 
class="pre">rcmed</span></code></a>, point to externally supported data 
sources. In the case of the RCMED data source, the Regional Climate Model 
Evaluation Database is run by NASA’s Jet Propulsion Laboratory.</p>
 <p>Adding additional data sources is quite simple. The only API limitation 
that we have on a data source is that it returns a valid <a class="reference 
internal" href="dataset.html#dataset.Dataset" title="dataset.Dataset"><code 
class="xref py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> object. Please feel free to send 
patches for adding more data sources.</p>
 <p>A simple example using the <a class="reference internal" 
href="../data_source/data_sources.html#module-local" title="local"><code 
class="xref py py-mod docutils literal"><span 
class="pre">local</span></code></a> data source to load a NetCDF file from your 
local machine:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span 
class="nn">ocw.data_source.local</span> <span class="k">as</span> <span 
class="nn">local</span>
@@ -70,7 +67,7 @@
 <div class="section" id="dataset-manipulations">
 <h2>Dataset Manipulations<a class="headerlink" href="#dataset-manipulations" 
title="Permalink to this headline">¶</a></h2>
 <p>All <a class="reference internal" href="dataset.html#dataset.Dataset" 
title="dataset.Dataset"><code class="xref py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> manipulations are handled by the 
<a class="reference internal" 
href="dataset_processor.html#module-dataset_processor" 
title="dataset_processor"><code class="xref py py-mod docutils literal"><span 
class="pre">dataset_processor</span></code></a> module. In general, an 
evaluation will include calls to <a class="reference internal" 
href="dataset_processor.html#dataset_processor.subset" 
title="dataset_processor.subset"><code class="xref py py-func docutils 
literal"><span class="pre">dataset_processor.subset()</span></code></a>, <a 
class="reference internal" 
href="dataset_processor.html#dataset_processor.spatial_regrid" 
title="dataset_processor.spatial_regrid"><code class="xref py py-func docutils 
literal"><span 
class="pre">dataset_processor.spatial_regrid()</span></code></a>, and <a 
class="refe
 rence internal" href="dataset_processor.html#dataset_processor.temporal_rebin" 
title="dataset_processor.temporal_rebin"><code class="xref py py-func docutils 
literal"><span class="pre">dataset_processor.temporal_rebin()</span></code></a> 
to ensure that the datasets can actually be compared. <a class="reference 
internal" href="dataset_processor.html#module-dataset_processor" 
title="dataset_processor"><code class="xref py py-mod docutils literal"><span 
class="pre">dataset_processor</span></code></a> functions take a <a 
class="reference internal" href="dataset.html#dataset.Dataset" 
title="dataset.Dataset"><code class="xref py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> object and some various 
parameters and return a modified <a class="reference internal" 
href="dataset.html#dataset.Dataset" title="dataset.Dataset"><code class="xref 
py py-class docutils literal"><span 
class="pre">dataset.Dataset</span></code></a> object. The original dataset is 
never ma
 nipulated in the process.</p>
-<p>Subsetting is a great way to speed up your processing and keep useless data 
out of your plots. Notice that we&#8217;re using a <a class="reference 
internal" href="dataset.html#dataset.Bounds" title="dataset.Bounds"><code 
class="xref py py-class docutils literal"><span 
class="pre">dataset.Bounds</span></code></a> objec to represent the area of 
interest:</p>
+<p>Subsetting is a great way to speed up your processing and keep useless data 
out of your plots. Notice that we’re using a <a class="reference internal" 
href="dataset.html#dataset.Bounds" title="dataset.Bounds"><code class="xref py 
py-class docutils literal"><span class="pre">dataset.Bounds</span></code></a> 
object to represent the area of interest:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span 
class="nn">ocw.dataset_processor</span> <span class="k">as</span> <span 
class="nn">dsp</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">new_bounds</span> <span 
class="o">=</span> <span class="n">Bounds</span><span class="p">(</span><span 
class="n">min_lat</span><span class="p">,</span> <span 
class="n">max_lat</span><span class="p">,</span> <span 
class="n">min_lon</span><span class="p">,</span> <span 
class="n">max_lon</span><span class="p">,</span> <span 
class="n">start_time</span><span class="p">,</span> <span 
class="n">end_time</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">knmi_dataset</span> <span 
class="o">=</span> <span class="n">dsp</span><span class="o">.</span><span 
class="n">subset</span><span class="p">(</span><span 
class="n">knmi_dataset</span><span class="p">,</span> <span 
class="n">new_bounds</span><span class="p">)</span>
@@ -80,7 +77,7 @@
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="n">knmi_dataset</span> <span 
class="o">=</span> <span class="n">dsp</span><span class="o">.</span><span 
class="n">temporal_rebin</span><span class="p">(</span><span 
class="n">knmi_dataset</span><span class="p">,</span> <span 
class="n">datetime</span><span class="o">.</span><span 
class="n">timedelta</span><span class="p">(</span><span 
class="n">days</span><span class="o">=</span><span class="mi">365</span><span 
class="p">))</span>
 </pre></div>
 </div>
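As a plain-NumPy illustration of what a temporal rebin does (a sketch only, not the OCW implementation; the function name and array shapes are assumptions), collapsing monthly values into yearly means looks like this:

```python
import numpy as np

# Hypothetical stand-in for dataset_processor.temporal_rebin: collapse
# monthly values with shape (time, lat, lon) into yearly means.
def rebin_monthly_to_yearly(values):
    t, nlat, nlon = values.shape
    assert t % 12 == 0, "need whole years of monthly data"
    # Group the time axis into blocks of 12 months and average each block.
    return values.reshape(t // 12, 12, nlat, nlon).mean(axis=1)

monthly = np.arange(24 * 2 * 2, dtype=float).reshape(24, 2, 2)
yearly = rebin_monthly_to_yearly(monthly)
print(yearly.shape)  # (2, 2, 2)
```

The real `temporal_rebin` takes a `datetime.timedelta` and returns a new `dataset.Dataset`; this sketch only shows the shape change involved.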
-<p>It is critically necessary for our datasets to be on the same lat/lon grid 
before we try to compare them. That&#8217;s where spatial re-gridding comes in 
helpful. Here we re-grid our example dataset onto a 1-degree lat/lon grid 
within the range that we subsetted the dataset previously:</p>
+<p>It is critical that our datasets be on the same lat/lon grid 
before we try to compare them. That’s where spatial re-gridding comes in 
handy. Here we re-grid our example dataset onto a 1-degree lat/lon grid 
within the range to which we previously subsetted the dataset:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="n">new_lons</span> <span 
class="o">=</span> <span class="n">np</span><span class="o">.</span><span 
class="n">arange</span><span class="p">(</span><span 
class="n">min_lon</span><span class="p">,</span> <span 
class="n">max_lon</span><span class="p">,</span> <span class="mi">1</span><span 
class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">new_lats</span> <span 
class="o">=</span> <span class="n">np</span><span class="o">.</span><span 
class="n">arange</span><span class="p">(</span><span 
class="n">min_lat</span><span class="p">,</span> <span 
class="n">max_lat</span><span class="p">,</span> <span class="mi">1</span><span 
class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">knmi_dataset</span> <span 
class="o">=</span> <span class="n">dsp</span><span class="o">.</span><span 
class="n">spatial_regrid</span><span class="p">(</span><span 
class="n">knmi_dataset</span><span class="p">,</span> <span 
class="n">new_lats</span><span class="p">,</span> <span 
class="n">new_lons</span><span class="p">)</span>
@@ -89,15 +86,15 @@
 </div>
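The target grid in the re-gridding example above is just a pair of evenly spaced coordinate arrays. A small sketch with made-up bounds values (everything here is illustrative, not taken from a real dataset):

```python
import numpy as np

# Illustrative bounds; in the example above these come from the subset step.
min_lat, max_lat, min_lon, max_lon = 30.0, 35.0, -10.0, -5.0

# One-degree target grid, matching the np.arange calls in the example.
new_lats = np.arange(min_lat, max_lat, 1)
new_lons = np.arange(min_lon, max_lon, 1)

# spatial_regrid interpolates every dataset cell onto this lat/lon mesh.
lon_grid, lat_grid = np.meshgrid(new_lons, new_lats)
print(lat_grid.shape)  # (5, 5)
```

Note that `np.arange` with a float step excludes the upper bound, so the grid above runs from 30 to 34 degrees latitude inclusive.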
 <div class="section" id="metrics">
 <h2>Metrics<a class="headerlink" href="#metrics" title="Permalink to this 
headline">¶</a></h2>
-<p>Metrics are the backbone of an evaluation. You&#8217;ll find a number of 
(hopefully) useful &#8220;default&#8221; metrics in the <a class="reference 
internal" href="metrics.html#module-metrics" title="metrics"><code class="xref 
py py-mod docutils literal"><span class="pre">metrics</span></code></a> module 
in the toolkit. In general you won&#8217;t be too likely to use a metric 
outside of an evaluation, however you could run a metric manually if you so 
desired.:</p>
+<p>Metrics are the backbone of an evaluation. You’ll find a number of 
(hopefully) useful “default” metrics in the <a class="reference internal" 
href="metrics.html#module-metrics" title="metrics"><code class="xref py py-mod 
docutils literal"><span class="pre">metrics</span></code></a> module in the 
toolkit. In general you are unlikely to use a metric outside of an 
evaluation; however, you can run a metric manually if you so desire:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="kn">import</span> <span 
class="nn">ocw.metrics</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="c1"># Load 2 datasets</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">bias</span> <span 
class="o">=</span> <span class="n">ocw</span><span class="o">.</span><span 
class="n">metrics</span><span class="o">.</span><span 
class="n">Bias</span><span class="p">()</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="nb">print</span> <span 
class="n">bias</span><span class="o">.</span><span class="n">run</span><span 
class="p">(</span><span class="n">dataset1</span><span class="p">,</span> <span 
class="n">dataset2</span><span class="p">)</span>
 </pre></div>
 </div>
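Under the hood, the bias metric is just an elementwise difference of the two datasets' values. A plain-NumPy sketch of what `bias.run` returns (the sample values here are made up for illustration):

```python
import numpy as np

# Made-up reference and target fields (e.g. temperatures in Kelvin).
ref_values = np.array([[280.0, 281.0], [282.0, 283.0]])
target_values = np.array([[279.0, 282.0], [281.0, 284.0]])

# The toolkit's Bias metric computes reference minus target.
bias_values = ref_values - target_values
print(bias_values)
```

Positive entries mean the reference is warmer than the target at that cell; negative entries mean the opposite.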
-<p>While this might be exactly what you need to get the job done, it is far 
more likely that you&#8217;ll need to run a number of metrics over a number of 
datasets. That&#8217;s where running an evaluation comes in, but we&#8217;ll 
get to that shortly.</p>
-<p>There are two &#8220;types&#8221; of metrics that the toolkit supports. A 
unary metric acts on a single dataset and returns a result. A binary metric 
acts on a target and reference dataset and returns a result. This is helpful to 
know if you decide that the included metrics aren&#8217;t sufficient. 
We&#8217;ve attempted to make adding a new metric as simple as possible. You 
simply create a new class that inherits from either the unary or binary base 
classes and override the <cite>run</cite> function. At this point your metric 
will behave exactly like the included metrics in the toolkit. Below is an 
example of how one of the included metrics is implemented. If you need further 
assistance with your own metrics be sure to email the project&#8217;s mailing 
list!:</p>
+<p>While this might be exactly what you need to get the job done, it is far 
more likely that you’ll need to run a number of metrics over a number of 
datasets. That’s where running an evaluation comes in, but we’ll get to 
that shortly.</p>
+<p>There are two “types” of metrics that the toolkit supports. A unary 
metric acts on a single dataset and returns a result. A binary metric acts on a 
target and reference dataset and returns a result. This is helpful to know if 
you decide that the included metrics aren’t sufficient. We’ve attempted to 
make adding a new metric as simple as possible. You simply create a new class 
that inherits from either the unary or binary base classes and override the 
<cite>run</cite> function. At this point your metric will behave exactly like 
the included metrics in the toolkit. Below is an example of how one of the 
included metrics is implemented. If you need further assistance with your own 
metrics be sure to email the project’s mailing list!:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="k">class</span> <span 
class="nc">Bias</span><span class="p">(</span><span 
class="n">BinaryMetric</span><span class="p">):</span>
 <span class="gp">&gt;&gt;&gt; </span>    <span 
class="sd">&#39;&#39;&#39;Calculate the bias between a reference and target 
dataset.&#39;&#39;&#39;</span>
 <span class="go">&gt;&gt;&gt;</span>
@@ -119,7 +116,7 @@
 <span class="gp">&gt;&gt;&gt; </span><span class="s1">        return 
ref_dataset.values - target_dataset.values</span>
 </pre></div>
 </div>
-<p>While this might look a bit scary at first, if we take out all the 
documentation you&#8217;ll see that it&#8217;s really extremely simple.:</p>
+<p>While this might look a bit scary at first, if we take out all the 
documentation you’ll see that it’s really quite simple:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="c1"># Our new Bias metric inherits 
from the Binary Metric base class</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="k">class</span> <span 
class="nc">Bias</span><span class="p">(</span><span 
class="n">BinaryMetric</span><span class="p">):</span>
 <span class="gp">&gt;&gt;&gt; </span>    <span class="c1"># Since our new 
metric is a binary metric we need to override</span>
@@ -130,7 +127,7 @@
 <span class="gp">&gt;&gt;&gt; </span>        <span class="k">return</span> 
<span class="n">ref_dataset</span><span class="o">.</span><span 
class="n">values</span> <span class="o">-</span> <span 
class="n">target_dataset</span><span class="o">.</span><span 
class="n">values</span>
 </pre></div>
 </div>
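The pattern in the stripped-down example is easy to reproduce outside the toolkit. Below is a self-contained, runnable sketch of it; note that the `Dataset` and `BinaryMetric` types here are stand-ins for illustration, not the actual OCW classes:

```python
import numpy as np
from collections import namedtuple

# Stand-in for an OCW dataset: only the .values attribute matters here.
Dataset = namedtuple("Dataset", ["values"])

class BinaryMetric:
    """Stand-in base class: binary metrics compare two datasets."""
    def run(self, ref_dataset, target_dataset):
        raise NotImplementedError

class Bias(BinaryMetric):
    """Calculate the bias between a reference and target dataset."""
    def run(self, ref_dataset, target_dataset):
        # Return the difference without touching either input dataset.
        return ref_dataset.values - target_dataset.values

ref = Dataset(np.array([1.0, 2.0, 3.0]))
target = Dataset(np.array([0.5, 2.5, 3.0]))
print(Bias().run(ref, target))
```

The key point carries over to the real toolkit: subclass the appropriate base class, override `run`, and return a new result rather than mutating the inputs.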
-<p>It is very important to note that you shouldn&#8217;t change the datasets 
that are passed into the metric that you&#8217;re implementing. If you do you 
might cause unexpected results in future parts of the evaluation. If you need 
to do manipulations, copy the data first and do manipulations on the copy. 
Leave the original dataset alone!</p>
+<p>It is very important to note that you shouldn’t change the datasets that 
are passed into the metric that you’re implementing. If you do, you might 
cause unexpected results in later parts of the evaluation. If you need to do 
manipulations, copy the data first and work on the copy. Leave the 
original dataset alone!</p>
 </div>
 <div class="section" id="handling-an-evaluation">
 <h2>Handling an Evaluation<a class="headerlink" href="#handling-an-evaluation" 
title="Permalink to this headline">¶</a></h2>
@@ -162,12 +159,12 @@
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="n">new_eval</span> <span 
class="o">=</span> <span class="nb">eval</span><span class="o">.</span><span 
class="n">Evaluation</span><span class="p">(</span><span 
class="n">ref_dataset</span><span class="p">,</span> <span 
class="n">target_datasets</span><span class="p">,</span> <span 
class="n">metrics</span><span class="p">)</span>
 </pre></div>
 </div>
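Conceptually, running the binary metrics in an evaluation is a nested loop over metrics and (reference, target) pairs. A minimal sketch of that loop (the function name and the result layout are assumptions for illustration, not the OCW <cite>Evaluation</cite> API):

```python
import numpy as np
from collections import namedtuple

Dataset = namedtuple("Dataset", ["values"])  # stand-in dataset type

# Stand-in binary metric: reference minus target, as in the Bias example.
def bias(ref, target):
    return ref.values - target.values

def run_binary_metrics(ref_dataset, target_datasets, metrics):
    # results[i][j] holds metrics[i] run against (ref, target_datasets[j]).
    return [[m(ref_dataset, t) for t in target_datasets] for m in metrics]

ref = Dataset(np.array([2.0, 4.0]))
targets = [Dataset(np.array([1.0, 1.0])), Dataset(np.array([2.0, 2.0]))]
results = run_binary_metrics(ref, targets, [bias])
print(results[0][0])  # [1. 3.]
```

This mirrors the description below: every binary metric is run against every (reference, target) pair, while unary metrics would instead be run once per dataset.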
-<p>Notice two things about this. First, we&#8217;re splitting the datasets 
into a reference dataset (ref_dataset) and a list of target datasets 
(target_datasets). Second, one of the metrics that we loaded (<a 
class="reference internal" href="metrics.html#metrics.TemporalStdDev" 
title="metrics.TemporalStdDev"><code class="xref py py-class docutils 
literal"><span class="pre">metrics.TemporalStdDev</span></code></a>) is a unary 
metric. The reference/target dataset split is necessary to handling binary 
metrics. When an evaluation is run, all the binary metrics are run against 
every (reference, target) dataset pair. So the above evaluation could be 
replaced with the following calls. Of course this wouldn&#8217;t handle the 
unary metric, but we&#8217;ll get to that in a second.:</p>
+<p>Notice two things about this. First, we’re splitting the datasets into a 
reference dataset (ref_dataset) and a list of target datasets 
(target_datasets). Second, one of the metrics that we loaded (<a 
class="reference internal" href="metrics.html#metrics.TemporalStdDev" 
title="metrics.TemporalStdDev"><code class="xref py py-class docutils 
literal"><span class="pre">metrics.TemporalStdDev</span></code></a>) is a unary 
metric. The reference/target dataset split is necessary for handling binary 
metrics. When an evaluation is run, all the binary metrics are run against 
every (reference, target) dataset pair. So the above evaluation could be 
replaced with the following calls. Of course this wouldn’t handle the unary 
metric, but we’ll get to that in a second:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="n">result1</span> <span 
class="o">=</span> <span class="n">bias</span><span class="o">.</span><span 
class="n">run</span><span class="p">(</span><span 
class="n">ref_dataset</span><span class="p">,</span> <span 
class="n">target1</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">result2</span> <span 
class="o">=</span> <span class="n">bias</span><span class="o">.</span><span 
class="n">run</span><span class="p">(</span><span 
class="n">ref_dataset</span><span class="p">,</span> <span 
class="n">target2</span><span class="p">)</span>
 </pre></div>
 </div>
-<p>Unary metrics are handled slightly differently but they&#8217;re still 
simple. Each unary metric passed into the evaluation is run against 
<em>every</em> dataset in the evaluation. So we could replace the above 
evaluation with the following calls:</p>
+<p>Unary metrics are handled slightly differently but they’re still simple. 
Each unary metric passed into the evaluation is run against <em>every</em> 
dataset in the evaluation. So we could replace the above evaluation with the 
following calls:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="n">unary_result1</span> <span 
class="o">=</span> <span class="n">tstd</span><span class="p">(</span><span 
class="n">ref_dataset</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">unary_result2</span> 
<span class="o">=</span> <span class="n">tstd</span><span 
class="p">(</span><span class="n">target1</span><span class="p">)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">unary_result3</span> 
<span class="o">=</span> <span class="n">tstd</span><span 
class="p">(</span><span class="n">target2</span><span class="p">)</span>
@@ -201,7 +198,7 @@
 </div>
 <div class="section" id="plotting">
 <h2>Plotting<a class="headerlink" href="#plotting" title="Permalink to this 
headline">¶</a></h2>
-<p>Plotting can be fairly complicated business. Luckily we have <a 
class="reference external" 
href="https://cwiki.apache.org/confluence/display/CLIMATE/Guide+to+Plotting+API";>pretty
 good documentation</a> on the project wiki that can help you out. There are 
also fairly simple examples in the project&#8217;s example folder with the 
remainder of the code such as the following:</p>
+<p>Plotting can be a fairly complicated business. Luckily we have <a 
class="reference external" 
href="https://cwiki.apache.org/confluence/display/CLIMATE/Guide+to+Plotting+API">pretty
 good documentation</a> on the project wiki that can help you out. There are 
also fairly simple examples in the project’s example folder with the 
remainder of the code, such as the following:</p>
 <div class="highlight-default"><div class="highlight"><pre><span></span><span 
class="gp">&gt;&gt;&gt; </span><span class="c1"># Let&#39;s grab the values 
returned for bias.run(ref_dataset, target1)</span>
 <span class="gp">&gt;&gt;&gt; </span><span class="n">results</span> <span 
class="o">=</span> <span class="n">bias_evaluation</span><span 
class="o">.</span><span class="n">results</span><span class="p">[</span><span 
class="mi">0</span><span class="p">][</span><span class="mi">0</span><span 
class="p">]</span>
 <span class="go">&gt;&gt;&gt;</span>
@@ -242,7 +239,7 @@
 <h3>Related Topics</h3>
 <ul>
   <li><a href="../index.html">Documentation overview</a><ul>
-      <li>Previous: <a href="../index.html" title="previous chapter">Welcome 
to Apache Open Climate Workbench&#8217;s documentation!</a></li>
+      <li>Previous: <a href="../index.html" title="previous chapter">Welcome 
to Apache Open Climate Workbench’s documentation!</a></li>
       <li>Next: <a href="dataset.html" title="next chapter">Dataset 
Module</a></li>
   </ul></li>
 </ul>
@@ -250,7 +247,7 @@
   <div role="note" aria-label="source link">
     <h3>This Page</h3>
     <ul class="this-page-menu">
-      <li><a href="../_sources/ocw/overview.txt"
+      <li><a href="../_sources/ocw/overview.rst.txt"
             rel="nofollow">Show Source</a></li>
     </ul>
    </div>
@@ -269,14 +266,14 @@
       <div class="clearer"></div>
     </div>
     <div class="footer">
-      &copy;2016, Apache Software Foundation.
+      &copy;2017, Apache Software Foundation.
       
       |
-      Powered by <a href="http://sphinx-doc.org/";>Sphinx 1.4.8</a>
-      &amp; <a href="https://github.com/bitprophet/alabaster";>Alabaster 
0.7.9</a>
+      Powered by <a href="http://sphinx-doc.org/">Sphinx 1.6.4</a>
+      &amp; <a href="https://github.com/bitprophet/alabaster">Alabaster 
0.7.10</a>
       
       |
-      <a href="../_sources/ocw/overview.txt"
+      <a href="../_sources/ocw/overview.rst.txt"
           rel="nofollow">Page source</a>
     </div>
 

