Author: buildbot
Date: Thu Sep 26 23:08:24 2013
New Revision: 880028
Log:
Staging update by buildbot for trafficserver
Modified:
websites/staging/trafficserver/trunk/cgi-bin/ (props changed)
websites/staging/trafficserver/trunk/content/ (props changed)
websites/staging/trafficserver/trunk/content/docs/trunk/admin/http-proxy-caching/index.en.html
Propchange: websites/staging/trafficserver/trunk/cgi-bin/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Thu Sep 26 23:08:24 2013
@@ -1 +1 @@
-1526641
+1526740
Propchange: websites/staging/trafficserver/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Thu Sep 26 23:08:24 2013
@@ -1 +1 @@
-1526641
+1526740
Modified:
websites/staging/trafficserver/trunk/content/docs/trunk/admin/http-proxy-caching/index.en.html
==============================================================================
---
websites/staging/trafficserver/trunk/content/docs/trunk/admin/http-proxy-caching/index.en.html
(original)
+++
websites/staging/trafficserver/trunk/content/docs/trunk/admin/http-proxy-caching/index.en.html
Thu Sep 26 23:08:24 2013
@@ -704,38 +704,47 @@ server later. </p>
this can result in many near simultaneous requests to the origin server,
potentially overwhelming it or associated
resources. There are several features in Traffic Server that can be used to
avoid this scenario.</p>
<h2 id="ReadWhileWriter">Read While Writer</h2>
-<p>When Traffic Server goes to fetch something from origin, and upon receiving
the response, any number of clients can be allowed to start serving the
partially filled cache object once background_fill_completed_threshold % of the
object has been received. The difference is that Squid allows this as soon as
it goes to origin, whereas ATS can not do it until we get the complete response
header. The reason for this is that we make no distinction between cache
refresh, and cold cache, so we have no way to know if a response is going to be
cacheable, and therefore allow read-while-writer functionality.</p>
+<p>When Traffic Server fetches an object from the origin, then once the
response headers have been received, any number of clients can be allowed to
start reading from the partially filled cache object after
<code>background_fill_completed_threshold</code> % of the object has been
received. The difference is that Squid allows this as soon as the request goes
to the origin, whereas ATS cannot do so until it has the complete response
header. This is because ATS makes no distinction between a cache refresh and a
cold cache, so it has no way to know whether a response will be cacheable and
therefore whether read-while-writer can be allowed.</p>
<p>The configurations necessary to enable this are in <a
href="../configuration-files/records.config"><code>records.config</code></a>:</p>
-<p>CONFIG <a
href="../configuration-files/records.config#proxy.config.cache.enable_read_while_writer">proxy.config.cache.enable_read_while_writer</a>
INT 1
-CONFIG <a
href="../configuration-files/records.config#proxy.config.http.background_fill_active_timeou">proxy.config.http.background_fill_active_timeou</a>
INT 0
-CONFIG <a
href="../configuration-files/records.config#proxy.config.http.background_fill_completed_threshold">proxy.config.http.background_fill_completed_threshold</a>
FLOAT 0.000000
-CONFIG <a
href="../configuration-files/records.config#proxy.config.cache.max_doc_size">proxy.config.cache.max_doc_size</a>
INT 0
-All four configurations are required, for the following reasons:</p>
+<div class="codehilite"><pre><span class="n">CONFIG</span> <span
class="p">[</span><span class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span
class="n">cache</span><span class="p">.</span><span
class="n">enable_read_while_writer</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">cache</span><span class="p">.</span><span
class="n">enable_read_while_writer</span><span class="p">)</span> <span
class="n">INT</span> 1
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">background_fill_active_timeou</span><span
class="p">](.</span><span class="o">./</span><span
class="n">configuration</span><span class="o">-</span><span
class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span
class="n">background_fill_active_timeou</span><span class="p">)</span> <span
class="n">INT</span> 0
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span
class="n">background_fill_completed_threshold</span><span
class="p">](.</span><span class="o">./</span><span
class="n">configuration</span><span class="o">-</span><span
class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span
class="n">background_fill_completed_threshold</span><span class="p">)</span>
<span class="n">FLOAT</span> 0<span class="p">.</span>000000
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span
class="n">cache</span><span class="p">.</span><span
class="n">max_doc_size</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">cache</span><span class="p">.</span><span
class="n">max_doc_size</span><span class="p">)</span> <span
class="n">INT</span> 0
+</pre></div>
+
+
+<p>All four configurations are required, for the following reasons:</p>
<ul>
-<li>enable_read_while_writer turns the feature on. It's off (0) by default</li>
-<li>The background fill feature should be allowed to kick in for every
possible request. This is necessary, in case the writer ("first client
session") goes away, someone needs to take over the session. The original
client's request can go away after background_fill_active_timeout seconds, and
the object will continue fetching in the background. The object then can start
being served to another request after background_fill_completed_threshold % of
the object has been fetched from origin.</li>
-<li>The proxy.config.cache.max_doc_size should be unlimited (set to 0), since
the object size may be unknown, and going over this limit would cause a
disconnect on the objects being served.</li>
+<li><code>enable_read_while_writer</code> turns the feature on. It is off (0)
by default.</li>
+<li>The background fill feature should be allowed to kick in for every
possible request. This is necessary because if the writer (the "first client
session") goes away, another session must take over. The original client's
request can go away after <code>background_fill_active_timeout</code> seconds,
and the object will continue to be fetched in the background. The object can
then start being served to other requests after
<code>background_fill_completed_threshold</code> % of it has been fetched from
the origin.</li>
+<li><code>proxy.config.cache.max_doc_size</code> should be unlimited (set to
0), since the object size may be unknown and exceeding this limit would cause a
disconnect for clients being served the object.</li>
</ul>
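The read-while-writer gating described above can be sketched as a small model; the function name and the simplified state (headers received, bytes in cache) are illustrative assumptions, not ATS internals:

```python
def can_serve_from_partial_fill(headers_complete: bool,
                                bytes_received: int,
                                object_size: int,
                                completed_threshold: float) -> bool:
    """Illustrative model of read-while-writer gating (not ATS internals).

    A second client may read the partially filled cache object only after
    the full response header has arrived (so cacheability is known) and at
    least `completed_threshold` (a fraction, 0.0-1.0) of the body is cached.
    """
    if not headers_complete:
        # Unlike Squid, ATS cannot share the fill before the complete
        # response header arrives, since cacheability is still unknown.
        return False
    return bytes_received >= completed_threshold * object_size

# With the documented threshold of 0.0, readers may join as soon as the
# headers arrive; a higher threshold delays them until enough body is cached.
print(can_serve_from_partial_fill(True, 0, 1_000_000, 0.0))
print(can_serve_from_partial_fill(True, 400_000, 1_000_000, 0.5))
```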
<p>Once all of this is enabled, you have something that is very close to, but
not quite the same as, Squid's Collapsed Forwarding.</p>
<h2 id="FuzzyRevalidation">Fuzzy Revalidation</h2>
-<p>Traffic Server can be set to attempt to revalidate an object before it
becomes stale in cache. :file:<code>records.config</code>:: contains the
settings:</p>
-<p>CONFIG <a
href="../configuration-files/records.config#proxy.config.http.cache.fuzz.time">proxy.config.http.cache.fuzz.time</a>
INT 240
-CONFIG <a
href="../configuration-files/records.config#proxy.config.http.cache.fuzz.min_time">proxy.config.http.cache.fuzz.min_time</a>
INT 0
-CONFIG <a
href="../configuration-files/records.config#proxy.config.http.cache.fuzz.probability">proxy.config.http.cache.fuzz.probability</a>
FLOAT 0.005</p>
-<p>For every request for an object that occurs "fuzz.time" before (in the
example above, 240 seconds) the object is set to become stale, there is a small
-chance (fuzz.probability == 0.5%) that the request will trigger a revalidation
request to the origin. For objects getting a few requests per second, this
would likely not trigger, but then this feature is not necessary anyways since
odds are only 1 or a small number of connections would hit origin upon objects
going stale. The defaults are a good compromise, for objects getting roughly 4
requests / second or more, it's virtually guaranteed to trigger a revalidate
event within the 240s. These configs are also overridable per remap rule or via
a plugin, so can be adjusted per request if necessary. </p>
+<p>Traffic Server can be set to attempt to revalidate an object before it
becomes stale in cache. <a
href="../configuration-files/records.config"><code>records.config</code></a>
contains the settings:</p>
+<div class="codehilite"><pre><span class="n">CONFIG</span> <span
class="p">[</span><span class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">cache</span><span class="p">.</span><span
class="n">fuzz</span><span class="p">.</span><span class="n">time</span><span
class="p">](.</span><span class="o">./</span><span
class="n">configuration</span><span class="o">-</span><span
class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span class="n">cache</span><span
class="p">.</span><span class="n">fuzz</span><span class="p">.</span><span
class="n">time</span><span class="p">)</span> <span class="n">INT</span> 240
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">cache</span><span class="p">.</span><span
class="n">fuzz</span><span class="p">.</span><span
class="n">min_time</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span class="n">cache</span><span
class="p">.</span><span class="n">fuzz</span><span class="p">.</span><span
class="n">min_time</span><span class="p">)</span> <span class="n">INT</span> 0
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">cache</span><span class="p">.</span><span
class="n">fuzz</span><span class="p">.</span><span
class="n">probability</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span class="n">cache</span><span
class="p">.</span><span class="n">fuzz</span><span class="p">.</span><span
class="n">probability</span><span class="p">)</span> <span
class="n">FLOAT</span> 0<span class="p">.</span>005
+</pre></div>
+
+
+<p>For every request for an object that arrives within <code>fuzz.time</code>
(in the example above, 240 seconds) of the object becoming stale, there is a
small chance (<code>fuzz.probability</code> == 0.5%) that the request will
trigger a revalidation request to the origin. For objects receiving only a few
requests per second this would rarely trigger, but in that case the feature is
also unnecessary, since only one or a small number of connections would hit
the origin when the object goes stale. The defaults are a good compromise: for
objects receiving roughly 4 requests per second or more, a revalidation event
is virtually guaranteed to trigger within the 240-second window. These
configurations are also overridable per remap rule or via a plugin, so they
can be adjusted per request if necessary.</p>
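The "virtually guaranteed at roughly 4 requests per second" claim can be checked with a quick calculation. Treating each request inside the fuzz window as an independent 0.5% Bernoulli trial is a simplifying assumption:

```python
# Probability that at least one request inside the fuzz window triggers an
# early revalidation, given the default settings from the snippet above.
fuzz_time = 240           # seconds before staleness that the window opens
fuzz_probability = 0.005  # per-request trigger chance (0.5%)

for rps in (0.1, 1, 4):
    n_requests = rps * fuzz_time
    p_trigger = 1 - (1 - fuzz_probability) ** n_requests
    print(f"{rps:>4} req/s -> {p_trigger:.1%} chance of early revalidation")
```

At low request rates the trigger is unlikely (and unneeded); at 4 req/s the 960 chances make an early revalidation all but certain, matching the text.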
<p>Note that if the revalidation occurs, the requested object is no longer
available to be served from cache. Subsequent
requests for that object will be proxied to the origin. </p>
-<p>Finally, the fuzz.min_time is there to be able to handle requests with a
TTL less than fuzz.time â it allows for different times to evaluate the
probability of revalidation for small TTLs and big TTLs. Objects with small
TTLs will start "rolling the revalidation dice" near the fuzz.min_time, while
objects with large TTLs would start at fuzz.time. A logarithmic like function
between determines the revalidation evaluation start time (which will be
between fuzz.min_time and fuzz.time). As the object gets closer to expiring,
the window start becomes more likely. By default this setting is not enabled,
but should be enabled anytime you have objects with small TTLs. Note that this
option predates overridable configurations, so you can achieve something
similar with a plugin or remap.config conf_remap.so configs.</p>
+<p>Finally, <code>fuzz.min_time</code> exists to handle requests with a TTL
less than <code>fuzz.time</code>: it allows different start times for
evaluating the revalidation probability for small and large TTLs. Objects with
small TTLs will start "rolling the revalidation dice" near
<code>fuzz.min_time</code>, while objects with large TTLs start at
<code>fuzz.time</code>. A logarithmic-like function determines the start of
the revalidation-evaluation window (which will fall between
<code>fuzz.min_time</code> and <code>fuzz.time</code>); as the object gets
closer to expiring, revalidation becomes more likely. By default this setting
is not enabled, but it should be enabled any time you have objects with small
TTLs. Note that this option predates overridable configurations, so you can
achieve something similar with a plugin or with <code>conf_remap.so</code>
settings in <code>remap.config</code>.</p>
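One plausible "logarithmic-like" interpolation of the window start can be sketched as follows; the exact curve ATS uses is not specified here, so this particular formula (and the assumption of a positive <code>fuzz.min_time</code>, since the shipped default of 0 would make the logarithm undefined) is purely illustrative:

```python
import math

def fuzz_window_start(ttl: float,
                      fuzz_min_time: float = 1.0,
                      fuzz_time: float = 240.0) -> float:
    """Hypothetical start of the revalidation window, in seconds before
    expiry. Interpolates logarithmically in the TTL between fuzz.min_time
    (small TTLs) and fuzz.time (large TTLs); not the actual ATS formula.
    """
    if ttl <= fuzz_min_time:
        return fuzz_min_time   # tiny TTLs: window opens near min_time
    if ttl >= fuzz_time:
        return fuzz_time       # large TTLs: window opens at fuzz.time
    # Log-scale fraction of where this TTL sits between the two bounds.
    frac = math.log(ttl / fuzz_min_time) / math.log(fuzz_time / fuzz_min_time)
    return fuzz_min_time + frac * (fuzz_time - fuzz_min_time)
```

The key property the text describes is preserved: the start time grows monotonically with the TTL and is clamped between the two settings.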
<p>These configurations are similar to Squid's refresh_stale_hit configuration
option.</p>
<h2 id="OpenReadRetryTimeout">Open Read Retry Timeout</h2>
-<p>The open read retry configurations attempt to reduce the number of
concurrent requests to the origin for a given object. While an object is being
fetched from the origin server, subsequent requests would wait
open_read_retry_time milliseconds before checking if the object can be served
from cache. If the object is still being fetched, the subsequent requests will
retry max_open_read_retries times. Thus, subsequent requests may wait a total
of (max_open_read_retries x open_read_retry_time) milliseconds before
establishing an origin connection of its own. For instance, if they are set to
5 and 10 respectively, connections will wait up to 50ms for a response to come
back from origin from a previous request, until this request is allowed
through.</p>
+<p>The open read retry configurations attempt to reduce the number of
concurrent requests to the origin for a given object. While an object is being
fetched from the origin server, subsequent requests wait
<code>open_read_retry_time</code> milliseconds before checking whether the
object can now be served from cache. If the object is still being fetched, a
subsequent request will retry up to <code>max_open_read_retries</code> times.
Thus, a subsequent request may wait a total of
(<code>max_open_read_retries</code> x <code>open_read_retry_time</code>)
milliseconds before establishing an origin connection of its own. For
instance, if these are set to 5 and 10 respectively, a connection will wait up
to 50 ms for a response from the origin for a previous request before being
allowed through.</p>
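The 50 ms worked example corresponds to a simple bounded retry loop. This sketch models only the waiting arithmetic, not ATS's actual cache-lock machinery; the function and the `object_ready` callable are illustrative:

```python
import time

def wait_for_cache_fill(object_ready,
                        max_open_read_retries: int = 5,
                        open_read_retry_time_ms: int = 10) -> str:
    """Poll the cache up to max_open_read_retries times, sleeping
    open_read_retry_time_ms between checks. Worst-case wait is
    max_open_read_retries * open_read_retry_time_ms milliseconds
    (5 * 10 = 50 ms with the values from the text). `object_ready` is any
    zero-argument callable reporting whether the object is now servable.
    """
    for _ in range(max_open_read_retries):
        if object_ready():
            return "serve-from-cache"
        time.sleep(open_read_retry_time_ms / 1000.0)
    # Gave up waiting: open our own connection to the origin.
    return "go-to-origin"
```

With the default `max_open_read_retries` of -1 the feature is off entirely and no such waiting happens.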
<p>These settings are inappropriate when objects are uncacheable. In those
cases, requests for an object effectively become serialized: each subsequent
request waits at least <code>open_read_retry_time</code> milliseconds before
being proxied to the origin.</p>
-<p>Similarly, this setting should be used in conjunction with Read While
Writer for big (those that take longer than (max_open_read_retries x
open_read_retry_time) milliseconds to transfer) cacheable objects. Without the
read-while-writer settings enabled, while the initial fetch is ongoing, not
only would subsequent requests be delayed by the maximum time, but also, those
requests would result in another request to the origin server.</p>
+<p>Similarly, these settings should be used in conjunction with Read While
Writer for large cacheable objects (those that take longer than
<code>max_open_read_retries</code> x <code>open_read_retry_time</code>
milliseconds to transfer). Without the read-while-writer settings enabled,
while the initial fetch is ongoing, subsequent requests would not only be
delayed by the maximum time, but would then each result in another request to
the origin server.</p>
<p>Since ATS now supports overriding these settings per request or per remap
rule, you can much more easily tune them to suit your setup.</p>
<p>The configurations are (with defaults):</p>
-<p>CONFIG <a
href="../configuration-files/records.config#proxy.config.http.cache.max_open_read_retries">proxy.config.http.cache.max_open_read_retries</a>
INT -1
-CONFIG <a
href="../configuration-files/records.config#proxy.config.http.cache.open_read_retry_time">proxy.config.http.cache.open_read_retry_time</a>
INT 10</p>
+<div class="codehilite"><pre><span class="n">CONFIG</span> <span
class="p">[</span><span class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">cache</span><span class="p">.</span><span
class="n">max_open_read_retries</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span class="n">cache</span><span
class="p">.</span><span class="n">max_open_read_retries</span><span
class="p">)</span> <span class="n">INT</span> <span class="o">-</span>1
+<span class="n">CONFIG</span> <span class="p">[</span><span
class="n">proxy</span><span class="p">.</span><span
class="n">config</span><span class="p">.</span><span class="n">http</span><span
class="p">.</span><span class="n">cache</span><span class="p">.</span><span
class="n">open_read_retry_time</span><span class="p">](.</span><span
class="o">./</span><span class="n">configuration</span><span
class="o">-</span><span class="n">files</span><span class="o">/</span><span
class="n">records</span><span class="p">.</span><span
class="n">config</span>#<span class="n">proxy</span><span
class="p">.</span><span class="n">config</span><span class="p">.</span><span
class="n">http</span><span class="p">.</span><span class="n">cache</span><span
class="p">.</span><span class="n">open_read_retry_time</span><span
class="p">)</span> <span class="n">INT</span> 10
+</pre></div>
+
+
<p>The default (-1) means that the feature is disabled and every connection is
allowed to go to the origin instantly. When enabled, Traffic Server will retry
up to <code>max_open_read_retries</code> times, each with an
<code>open_read_retry_time</code> timeout.</p>
</div>
</div>