Repository: trafficserver
Updated Branches:
  refs/heads/master 1f1e2ae15 -> aa37d0ab5


http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/cache-data-structures.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/cache/cache-data-structures.en.rst b/doc/arch/cache/cache-data-structures.en.rst
index 508c4da..1158051 100644
--- a/doc/arch/cache/cache-data-structures.en.rst
+++ b/doc/arch/cache/cache-data-structures.en.rst
@@ -1,28 +1,31 @@
 .. Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+.. _cache-data-structures:
 
 Cache Data Structures
-******************************
+*********************
 
 .. include:: common.defs
 
 .. cpp:class:: OpenDir
 
-   An open directory entry. It contains all the information of a :cpp:class:`Dir` plus additional information from the first :cpp:class:`Doc`.
+   An open directory entry. It contains all the information of a
+   :cpp:class:`Dir` plus additional information from the first :cpp:class:`Doc`.
 
 .. cpp:class:: CacheVC
 
@@ -30,15 +33,20 @@ Cache Data Structures
 
 .. cpp:function:: int CacheVC::openReadStartHead(int event, Event* e)
 
-   Do the initial read for a cached object.
+   Performs the initial read for a cached object.
 
 .. cpp:function:: int CacheVC::openReadStartEarliest(int event, Event* e)
 
-   Do the initial read for an alternate of an object.
+   Performs the initial read for an :term:`alternate` of an object.
 
 .. cpp:class:: HttpTunnel
 
-   Data transfer driver. This contains a set of *producers*. Each producer is connected to one or more *consumers*. The tunnel handles events and buffers so that data moves from producers to consumers. The data, as much as possible, is kept in reference counted buffers so that copies are done only when the data is modified or for sources (which acquire data from outside |TS|) and sinks (which move data to outside |TS|).
+   Data transfer driver. This contains a set of *producers*. Each producer is
+   connected to one or more *consumers*. The tunnel handles events and buffers
+   so that data moves from producers to consumers. The data, as much as
+   possible, is kept in reference counted buffers so that copies are done only
+   when the data is modified or for sources (which acquire data from outside
+   |TS|) and sinks (which move data to outside |TS|).
 
 .. cpp:class:: CacheControlResult
 
@@ -46,13 +54,17 @@ Cache Data Structures
 
 .. cpp:class:: CacheHTTPInfoVector
 
-   Defined in |P-CacheHttp.h|_. This is an array of :cpp:class:`HTTPInfo` objects and serves as the respository of information about alternates of an object. It is marshaled as part of the metadata for an object in the cache.
+   Defined in |P-CacheHttp.h|_. This is an array of :cpp:class:`HTTPInfo`
+   objects and serves as the repository of information about alternates of an
+   object. It is marshaled as part of the metadata for an object in the cache.
 
 .. cpp:class:: HTTPInfo
 
    Defined in |HTTP.h|_.
 
-   This class is a wrapper for :cpp:class:`HTTPCacheAlt`. It provides the external API for accessing data in the wrapped class. It contains only a pointer (possibly ``NULL``) to an instance of the wrapped class.
+   This class is a wrapper for :cpp:class:`HTTPCacheAlt`. It provides the
+   external API for accessing data in the wrapped class. It contains only a
+   pointer (possibly ``NULL``) to an instance of the wrapped class.
 
 .. cpp:class:: CacheHTTPInfo
 
@@ -62,12 +74,16 @@ Cache Data Structures
 
    Defined in |HTTP.h|_.
 
-   This is the metadata for a single alternate for a cached object. In contains among other data
+   This is the metadata for a single :term:`alternate` for a cached object. It
+   contains, among other data, the following:
 
    * The key for the earliest ``Doc`` of the alternate.
+
    * The request and response headers.
-   * The fragment offset table. [#]_
-   * Timestamps for request and response from origin server.
+
+   * The fragment offset table. [#fragment-offset-table]_
+
+   * Timestamps for request and response from :term:`origin server`.
 
 .. cpp:class:: EvacuationBlock
 
@@ -75,19 +91,25 @@ Cache Data Structures
 
 .. cpp:class:: Vol
 
-   This represents a storage unit inside a cache volume.
+   This represents a :term:`storage unit` inside a :term:`cache volume`.
 
    .. cpp:member:: off_t Vol::segments
 
-      The number of segments in the volume. This will be roughly the total number of entries divided by the number of entries in a segment. It will be rounded up to cover all entries.
+      The number of segments in the volume. This will be roughly the total
+      number of entries divided by the number of entries in a segment. It will
+      be rounded up to cover all entries.
 
    .. cpp:member:: off_t Vol::buckets
 
-      The number of buckets in the volume. This will be roughly the number of entries in a segment divided by ``DIR_DEPTH``. For currently defined values this is around 16,384 (2^16 / 4). Buckets are used as the targets of the index hash.
+      The number of buckets in the volume. This will be roughly the number of
+      entries in a segment divided by ``DIR_DEPTH``. For currently defined
+      values this is around 16,384 (2^16 / 4). Buckets are used as the targets
+      of the index hash.
 
    .. cpp:member:: DLL\<EvacuationBlock\> Vol::evacuate
 
-      Array of of :cpp:class:`EvacuationBlock` buckets. This is sized so there is one bucket for every evacuation span.
+      Array of :cpp:class:`EvacuationBlock` buckets. This is sized so there is
+      one bucket for every evacuation span.
 
    .. cpp:member:: off_t len
 
@@ -95,11 +117,13 @@ Cache Data Structures
 
 .. cpp:function:: int Vol::evac_range(off_t low, off_t high, int evac_phase)
 
-   Start an evacuation if there is any :cpp:class:`EvacuationBlock` in the range from *low* to *high*. Return 0 if no evacuation was started, non-zero otherwise.
+   Start an evacuation if there is any :cpp:class:`EvacuationBlock` in the range
+   from :arg:`low` to :arg:`high`. Return ``0`` if no evacuation was started,
+   non-zero otherwise.
 
 .. cpp:class:: CacheVol
 
-   A cache volume as described in :file:`volume.config`.
+   A :term:`cache volume` as described in :file:`volume.config`.
 
 .. cpp:class:: Doc
 
@@ -111,33 +135,48 @@ Cache Data Structures
 
    .. cpp:member:: uint32_t Doc::len
 
-      The length of this segment including the header length, fragment table, and this structure.
+      The length of this segment including the header length, fragment table,
+      and this structure.
 
    .. cpp:member:: uint64_t Doc::total_len
 
-      Total length of the entire document not including meta data but including headers.
+      Total length of the entire document not including meta data but including
+      headers.
 
    .. cpp:member:: INK_MD5 Doc::first_key
 
-      First index key in the document (the index key used to locate this object in the volume index).
+      First index key in the document (the index key used to locate this object
+      in the volume index).
 
    .. cpp:member:: INK_MD5 Doc::key
 
-      The index key for this fragment. Fragment keys are computationally chained so that the key for the next and previous fragments can be computed from this key.
+      The index key for this fragment. Fragment keys are computationally
+      chained so that the key for the next and previous fragments can be
+      computed from this key.
 
    .. cpp:member:: uint32_t Doc::hlen
 
-      Document header (metadata) length. This is not the length of the HTTP headers.
+      Document header (metadata) length. This is not the length of the HTTP
+      headers.
 
    .. cpp:member:: uint8_t Doc::ftype
 
-      Fragment type. Currently only `CACHE_FRAG_TYPE_HTTP` is used. Other types may be used for cache extensions if those are ever used / implemented.
+      Fragment type. Currently only ``CACHE_FRAG_TYPE_HTTP`` is used. Other
+      types may be used for cache extensions if those are ever implemented.
 
    .. cpp:member:: uint24_t Doc::flen
 
-      Fragment table length, if any. Only the first ``Doc`` in an object should contain a fragment table.
+      Fragment table length, if any. Only the first ``Doc`` in an object should
+      contain a fragment table.
 
-      The fragment table is a list of offsets relative to the HTTP content (not counting metadata or HTTP headers). Each offset is the byte offset of the first byte in the fragment. The first element in the table is the second fragment (what would be index 1 for an array). The offset for the first fragment is of course always zero and so not stored. The purpose of this is to enable a fast seek for range requests - given the first ``Doc`` the fragment containing the first byte in the range can be computed and loaded directly without further disk access.
+      The fragment table is a list of offsets relative to the HTTP content (not
+      counting metadata or HTTP headers). Each offset is the byte offset of the
+      first byte in the fragment. The first element in the table is the second
+      fragment (what would be index 1 for an array). The offset for the first
+      fragment is of course always zero and so not stored. The purpose of this
+      is to enable a fast seek for range requests. Given the first ``Doc`` the
+      fragment containing the first byte in the range can be computed and loaded
+      directly without further disk access.
 
       Removed as of version 3.3.0.
 
@@ -155,10 +194,14 @@ Cache Data Structures
 
    .. cpp:member:: uint32_t checksum
 
-      Unknown. (A checksum of some sort)
+      Unknown.
 
 .. cpp:class:: VolHeaderFooter
 
 .. rubric:: Footnotes
 
-.. [#] Changed in version 3.2.0. This previously resided in the first ``Doc`` but that caused different alternates to share the same fragment table.
+.. [#fragment-offset-table]
+
+   Changed in version 3.2.0. This previously resided in the first ``Doc`` but
+   that caused different alternates to share the same fragment table.
+
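A note on the fragment offset table described for ``Doc::flen`` above: the fast seek for range requests amounts to a search over the stored offsets. A minimal sketch (Python, not the actual Traffic Server C++ code; the helper name and the toy offsets are illustrative):

```python
import bisect

def fragment_for_offset(frag_offsets, byte_offset):
    """Return the index of the fragment containing byte_offset.

    frag_offsets is the fragment offset table as described above: entry i
    holds the byte offset of the first byte of fragment i + 1; the offset
    of fragment 0 is always zero and is not stored.
    """
    # bisect_right counts how many stored offsets are <= byte_offset,
    # which is exactly the fragment index (fragment 0 starts at 0).
    return bisect.bisect_right(frag_offsets, byte_offset)

# Fragments start at bytes 0, 1000, 2000, 3000 -> table stores [1000, 2000, 3000].
table = [1000, 2000, 3000]
print(fragment_for_offset(table, 0))     # 0
print(fragment_for_offset(table, 999))   # 0
print(fragment_for_offset(table, 1000))  # 1
print(fragment_for_offset(table, 2500))  # 2
```

Given the first ``Doc``, this index lets the fragment holding the first requested byte be loaded directly, without walking the chain from the start.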

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/cache.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/cache/cache.en.rst b/doc/arch/cache/cache.en.rst
index f103d9f..4e50649 100644
--- a/doc/arch/cache/cache.en.rst
+++ b/doc/arch/cache/cache.en.rst
@@ -1,19 +1,19 @@
 .. Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
- 
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
    http://www.apache.org/licenses/LICENSE-2.0
- 
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
 
 Apache Traffic Server Cache
 ***************************
@@ -30,4 +30,4 @@ Contents:
    tier-storage.en
    ram-cache.en
 
-..   appendix
+.. appendix

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/ram-cache.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/cache/ram-cache.en.rst b/doc/arch/cache/ram-cache.en.rst
index b0b15e1..e7174cf 100644
--- a/doc/arch/cache/ram-cache.en.rst
+++ b/doc/arch/cache/ram-cache.en.rst
@@ -5,9 +5,9 @@
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License.  You may obtain a copy of the License at
-   
+
    http://www.apache.org/licenses/LICENSE-2.0
-   
+
    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
@@ -21,68 +21,149 @@
 Ram Cache
 *********
 
-New Ram Cache Algorithm (CLFUS)
+New RAM Cache Algorithm (CLFUS)
 ===============================
 
-The new Ram Cache uses ideas from a number of cache replacement policies and algorithms, including LRU, LFU, CLOCK, GDFS and 2Q, called CLFUS (Clocked Least Frequently Used by Size). It avoids any patented algorithms and includes the following features:
+The new RAM Cache uses ideas from a number of cache replacement policies and
+algorithms, including LRU, LFU, CLOCK, GDFS and 2Q, called CLFUS (Clocked Least
+Frequently Used by Size). It avoids any patented algorithms and includes the
+following features:
 
-* Balances Recentness, Frequency and Size to maximize hit rate (not byte hit rate).
-* Is Scan Resistant and extracts robust hit rates even when the working set does not fit in the Ram Cache.
-* Supports compression at 3 levels fastlz, gzip(libz), and xz(liblzma).  Compression can be moved to another thread.
-* Has very low CPU overhead, only little more than a basic LRU.  Rather than using an O(lg n) heap, it uses a probabilistic replacement policy for O(1) cost with low C.
-* Has relatively low memory overhead of approximately 200 bytes per object in memory.
+* Balances Recentness, Frequency and Size to maximize hit rate (not byte hit
+  rate).
 
-The rational for emphasizing hit rate over byte hit rate is that the overhead of pulling more bytes from secondary storage is low compared to the cost of a request.
+* Is Scan Resistant and extracts robust hit rates even when the working set does
+  not fit in the RAM Cache.
 
-The Ram Cache consists of an object hash fronting 2 LRU/CLOCK lists and a "Seen" hash table.  The first "Cached" list contains objects in memory while the second contains a "History" of objects which have either recently been in memory or are being considered for keeping in memory.  The "Seen" hash table is used to make the algorithm scan resistant.
+* Supports compression at 3 levels: fastlz, gzip (libz), and xz (liblzma).
+  Compression can be moved to another thread.
 
-The list entries record the following information:
+* Has very low CPU overhead, only slightly more than a basic LRU. Rather than
+  using an O(lg n) heap, it uses a probabilistic replacement policy for O(1)
+  cost with low C.
 
-* key - 16 byte unique object identifier
-* auxkeys - 8 bytes worth of version number (in our system the block in the partition).  When the version of an object changes old entries are purged from the cache.
-* hits - number of hits within this clock period
-* size - the size of the object in the cache
-* len - the actual length of the object (differs from size because of compression and padding)
-* compressed_len - the compressed length of the object
-* compressed (none, fastlz, libz, liblzma)
-* uncompressible (flag)
-* copy - whether or not this object should be copied in and copied out (e.g. HTTP HDR)
-* LRU link
-* HASH link
-* IOBufferData (smart point to the data buffer)
+* Has relatively low memory overhead of approximately 200 bytes per object in
+  memory.
 
+The rationale for emphasizing hit rate over byte hit rate is that the overhead
+of pulling more bytes from secondary storage is low compared to the cost of a
+request.
+
+The RAM Cache consists of an object hash fronting 2 LRU/CLOCK lists and a *seen*
+hash table. The first cached list contains objects in memory, while the second
+contains a history of objects which have either recently been in memory or are
+being considered for keeping in memory. The *seen* hash table is used to make
+the algorithm scan resistant.
+
+The list entries record the following information:
 
-The interface to the cache is Get and Put operations.  Get operations check if an object is in the cache and are called on a read attempt.  The Put operation decides whether or not to cache the provided object in memory.  It is called after a read from secondary storage.
+============== ================================================================
+Value          Description
+============== ================================================================
+key            16 byte unique object identifier.
+auxkeys        8 bytes worth of version number (in our system, the block in the
+               partition). When the version of an object changes, old entries
+               are purged from the cache.
+hits           Number of hits within this clock period.
+size           Size of the object in the cache.
+len            Actual length of the object, which differs from *size* because
+               of compression and padding.
+compressed_len Compressed length of the object.
+compressed     Compression type, or ``none`` if no compression. Possible types
+               are: *fastlz*, *libz*, and *liblzma*.
+uncompressible Flag indicating that content cannot be compressed (true), or
+               that it may be compressed (false).
+copy           Whether or not this object should be copied in and copied out
+               (e.g. HTTP HDR).
+LRU link       Link in the LRU/CLOCK list.
+HASH link      Link in the object hash table.
+IOBufferData   Smart pointer to the data buffer.
+============== ================================================================
+
+The interface to the cache is *Get* and *Put* operations. Get operations check
+if an object is in the cache and are called on a read attempt. The Put operation
+decides whether or not to cache the provided object in memory. It is called
+after a read from secondary storage.
 
 Seen Hash
 =========
 
-The Seen List becomes active after the Cached and History lists become full after a cold start.  The purpose is to make the cache scan resistant which means that the cache state must not be effected at all by a long sequence Get and Put operations on objects which are seen only once.  This is essential, without it not only would the cache be polluted, but it could lose critical information about the objects that it cares about.  It is therefore essential that the Cache and History lists are not effected by Get or Put operations on objects seen the first time.  The Seen Hash maintains a set of 16 bit hash tags, and requests which do not hit in the object cache (are in the Cache List or History List) and do not match the hash tag result in the hash tag begin updated but are otherwise ignored. The Seen Hash is sized to approximately the number of objects in the cache in order to match the number that are passed through it with the CLOCK rate of the Cached and History Lists.
+The *Seen List* becomes active after the *Cached* and *History* lists become
+full following a cold start. The purpose is to make the cache scan resistant,
+which means that the cache state must not be affected at all by a long sequence of
+Get and Put operations on objects which are seen only once. This is essential,
+and without it not only would the cache be polluted, but it could lose critical
+information about the objects that it cares about. It is therefore essential
+that the Cache and History lists are not affected by Get or Put operations on
+objects seen the first time. The Seen Hash maintains a set of 16 bit hash tags,
+and requests which do not hit in the object cache (are in the Cache List or
+History List) and do not match the hash tag result in the hash tag being updated
+but are otherwise ignored. The Seen Hash is sized to approximately the number of
+objects in the cache in order to match the number that are passed through it
+with the CLOCK rate of the Cached and History Lists.
 
 Cached List
 ===========
 
-The Cached list contains objects actually in memory.  The basic operation is LRU with new entries inserted into a FIFO (queue) and hits causing objects to be reinserted.  The interesting bit comes when an object is being considered for insertion.  First we check if the Object Hash to see if the object is in the Cached List or History.  Hits result in updating the "hit" field and reinsertion.  History hits result in the "hit" field being updated and a comparison to see if this object should be kept in memory.  The comparison is against the least recently used members of the Cache List, and is based on a weighted frequency::
+The *Cached List* contains objects actually in memory. The basic operation is
+LRU with new entries inserted into a FIFO queue and hits causing objects to be
+reinserted. The interesting bit comes when an object is being considered for
+insertion. A check is first made against the Object Hash to see if the object
+is in the Cached List or History. Hits result in updating the ``hit`` field and
+reinsertion of the object. History hits result in the ``hit`` field being
+updated and a comparison to see if this object should be kept in memory. The
+comparison is against the least recently used members of the Cache List, and
+is based on a weighted frequency::
 
    CACHE_VALUE = hits / (size + overhead)
 
-A new object must beat enough bytes worth of currently cached objects to cover itself.  Each time an object is considered for replacement the CLOCK moves forward.  If the History object has a greater value then it is inserted into the Cached List and the replaced objects are removed from memory and their list entries are inserted into the History List.  If the History object has a lesser value it is reinserted into the History List.  Objects considered for replacement (at least one) but not replaced have their "hits" field set to zero and are reinserted into the Cached List.  This is the CLOCK operation on the Cached List.
+A new object must beat enough bytes worth of currently cached objects to cover
+itself. Each time an object is considered for replacement the CLOCK moves
+forward. If the History object has a greater value then it is inserted into the
+Cached List and the replaced objects are removed from memory and their list
+entries are inserted into the History List. If the History object has a lesser
+value it is reinserted into the History List. Objects considered for replacement
+(at least one) but not replaced have their ``hits`` field set to ``0`` and are
+reinserted into the Cached List. This is the CLOCK operation on the Cached List.
 
 History List
 ============
 
-Each CLOCK the least recently used entry in the History List is dequeued and if the "hits" field is not greater than 1 (it was hit at least once in the History or Cached List) it is deleted, otherwise the "hits" is set to zero and it is requeued on the History List.
-
-Compression/Decompression
-=========================
-
-Compression is performed by a background operation (currently called as part of Put) which maintains a pointer into the Cached List and runs toward the head compressing entries.  Decompression occurs on demand during a Get.  In the case of objects tagged "copy" the compressed version is reinserted in the LRU since we need to make a copy anyway.  Those not tagged "copy" are inserted uncompressed in the hope that they can be reused in uncompressed form.  This is a compile time option and may be something we want to change.
-
-There are 3 algorithms and levels of compression (speed on 1 thread i7 920) :
-
-* fastlz: 173 MB/sec compression, 442 MB/sec decompression : basically free since disk or network will limit first, ~53% final size
-* libz: 55 MB/sec compression, 234 MB/sec decompression : almost free, particularly decompression, ~37% final size
-* liblzma: 3 MB/sec compression, 50 MB/sec decompression : expensive, ~27% final size
-
-These are ballpark numbers, and your millage will vary enormously.  JPEG for example will not compress with any of these. The RamCache does detect compression level and will declare something "incompressible" if it doesn't get below 90% of the original size. This value is cached so that the RamCache will not attempt to compress it again (at least as long as it is in the history).
+Each CLOCK, the least recently used entry in the History List is dequeued and
+if the ``hits`` field is not greater than ``1`` (it was hit at least once in
+the History or Cached List) it is deleted. Otherwise, the ``hits`` is set to
+``0`` and it is requeued on the History List.
+
+Compression and Decompression
+=============================
+
+Compression is performed by a background operation (currently called as part of
+Put) which maintains a pointer into the Cached List and runs toward the head
+compressing entries. Decompression occurs on demand during a Get. In the case
+of objects tagged ``copy``, the compressed version is reinserted in the LRU
+since we need to make a copy anyway. Those not tagged ``copy`` are inserted
+uncompressed in the hope that they can be reused in uncompressed form. This is
+a compile time option and may be something we want to change.
+
+There are 3 algorithms and levels of compression (speed on an Intel i7 920
+series processor using one thread):
+
+======= ================ ================== ====================================
+Method  Compression Rate Decompression Rate Notes
+======= ================ ================== ====================================
+fastlz  173 MB/sec       442 MB/sec         Basically free since disk or network
+                                            will limit first; ~53% final size.
+libz    55 MB/sec        234 MB/sec         Almost free, particularly
+                                            decompression; ~37% final size.
+liblzma 3 MB/sec         50 MB/sec          Expensive; ~27% final size.
+======= ================ ================== ====================================
+
+These are ballpark numbers, and your mileage will vary enormously. JPEG, for
+example, will not compress with any of these (or at least will only do so at
+such a marginal level that the cost of compression and decompression is wholly
+unjustified), and the same is true of many other media and binary file types
+which embed some form of compression. The RAM Cache does detect compression
+level and will declare something *incompressible* if it doesn't get below 90% of
+the original size. This value is cached so that the RAM Cache will not attempt
+to compress it again (at least as long as it is in the history).
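A side note on the weighted-frequency admission check described under "Cached List": it can be sketched as below. This is an illustrative Python sketch, not the actual CLFUS C++ implementation; the function names are invented, and the 200 byte overhead constant is taken from the approximate per-object memory overhead quoted above.

```python
OVERHEAD = 200  # approximate per-object bookkeeping cost in bytes (see above)

def cache_value(hits, size, overhead=OVERHEAD):
    """The weighted frequency: CACHE_VALUE = hits / (size + overhead)."""
    return hits / (size + overhead)

def should_admit(candidate, lru_tail):
    """Decide whether a History List object should displace cached objects.

    candidate and lru_tail entries are (hits, size) tuples; lru_tail is
    ordered least recently used first. The candidate must beat enough
    bytes worth of currently cached objects to cover its own size.
    """
    freed = 0
    victim_value_sum = 0.0
    for hits, size in lru_tail:
        if freed >= candidate[1]:
            break  # enough bytes would be freed to hold the candidate
        freed += size
        victim_value_sum += cache_value(hits, size)
    # Admit only if the candidate's value exceeds the combined value of
    # the objects it would evict.
    return cache_value(*candidate) > victim_value_sum

print(should_admit((5, 1000), [(1, 600), (1, 600)]))  # True
print(should_admit((1, 1000), [(4, 600), (4, 600)]))  # False
```

The real algorithm additionally advances the CLOCK on each comparison and requeues non-evicted victims with their hits reset, as the text describes; this sketch shows only the value comparison.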
 

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/tier-storage.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/cache/tier-storage.en.rst b/doc/arch/cache/tier-storage.en.rst
index 933b02b..88c147d 100644
--- a/doc/arch/cache/tier-storage.en.rst
+++ b/doc/arch/cache/tier-storage.en.rst
@@ -15,104 +15,129 @@
    specific language governing permissions and limitations
    under the License.
 
-==============================
 Tiered Storage Design
 ==============================
 
 .. include:: common.defs
 
---------------
 Introduction
 --------------
 
-Tiered storage is an attempt to allow |TS| to take advantage of physical storage with different properties. This design
-concerns only mechanism. Policies to take advantage of these are outside of the scope of this document. Instead we will
-presume an *oracle* which implements this policy and describe the queries that must be answered by the oracle and the
-effects of the answers.
-
-Beyond avoiding question of tier policy the design is also intended to be effectively identical to current operations
-for the case where there is only one tier.
-
-The most common case for tiers is an ordered list of tiers, where higher tiers are presumed faster but more expensive
-(or more limited in capacity). This is not required. It might be that different tiers are differentiated by other
-properties (such as expected persistence). The design here is intended to handle both cases.
-
-The design presumes that if a user has multiple tiers of storage and an ordering for those tiers, they will usually want
-content stored at one tier level to also be stored at every other lower level as well, so that it does not have to be
+Tiered storage is an attempt to allow |TS| to take advantage of physical storage
+with different properties. This design concerns only mechanism. Policies to take
+advantage of these are outside of the scope of this document. Instead we will
+presume an *oracle* which implements this policy and describe the queries that
+must be answered by the oracle and the effects of the answers.
+
+Beyond avoiding questions of tier policy, the design is also intended to be
+effectively identical to current operations for the case where there is only
+one tier.
+
+The most common case for tiers is an ordered list of tiers, where higher tiers
+are presumed faster but more expensive (or more limited in capacity). This is
+not required. It might be that different tiers are differentiated by other
+properties (such as expected persistence). The design here is intended to
+handle both cases.
+
+The design presumes that if a user has multiple tiers of storage and an ordering
+for those tiers, they will usually want content stored at one tier level to also
+be stored at every other lower level as well, so that it does not have to be
 copied if evicted from a higher tier.
 
--------------
 Configuration
 -------------
 
-Each storage unit in :file:`storage.config` can be marked with a *quality* value which is 32 bit number. Storage units
-that are not marked are all assigned the same value which is guaranteed to be distinct from all explicit values. The
-quality value is arbitrary from the point of view of this design, serving as a tag rather than a numeric value. The user
-(via the oracle) can impose what ever additional meaning is useful on this value (rating, bit slicing, etc.). In such
-cases all volumes should be explicitly assigned a value, as the default / unmarked value is not guaranteed to have any
-relationship to explicit values. The unmarked value is intended to be useful in situations where the user has no
-interest in tiered storage and so wants to let Traffic Server automatically handle all volumes as a single tier.
+Each :term:`storage unit` in :file:`storage.config` can be marked with a
+*quality* value, which is a 32-bit number. Storage units that are not marked are
+all assigned the same value which is guaranteed to be distinct from all 
explicit
+values. The quality value is arbitrary from the point of view of this design,
+serving as a tag rather than a numeric value. The user (via the oracle) can
+impose whatever additional meaning is useful on this value (rating, bit
+slicing, etc.).
+
+In such cases, all :term:`volumes <cache volume>` should be explicitly assigned
+a value, as the default (unmarked) value is not guaranteed to have any
+relationship to explicit values. The unmarked value is intended to be useful in
+situations where the user has no interest in tiered storage and so wants to let
+|TS| automatically handle all volumes as a single tier.
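
As a sketch, such marking in :file:`storage.config` might look like the
following. The ``quality`` tag is the proposed syntax from this design, not a
currently supported directive, and the device paths are hypothetical:

```
# Two explicitly tagged storage units forming one tier (proposed syntax).
/dev/sde  quality=100
/dev/sdf  quality=100
# An unmarked unit; it receives the distinct default quality value.
var/trafficserver  256M
```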
 
--------------
 Operations
--------------
+----------
 
-After a client request is received and processed, volume assignment is done. 
This would be changed to do volume assignment across all tiers simultaneously. 
For each tier the oracle would return one of four values along with a volume 
pointer.
+After a client request is received and processed, volume assignment is done. 
For
+each tier, the oracle would return one of four values along with a volume
+pointer:
 
-`READ`
+``READ``
     The tier appears to have the object and can serve it.
 
-`WRITE`
-    The object is not in this tier and should be written to this tier if 
possible.
+``WRITE``
+    The object is not in this tier and should be written to this tier if
+    possible.
 
-`RW`
-    Treat as `READ` if possible but if the object turns out to not in the 
cache treat as `WRITE`.
+``RW``
+    Treat as ``READ`` if possible, but if the object turns out to not be in
+    the cache, treat as ``WRITE``.
 
-`NO_SALE`
+``NO_SALE``
     Do not interact with this tier for this object.
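
The per-tier answer can be sketched in C++ as follows. All of the names here
(``TierDecision``, ``oracle_assign``, the parity-based placeholder policy) are
invented for illustration; none of them exist in the code base:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical types illustrating the oracle's per-tier answer.
enum class TierDecision { READ, WRITE, RW, NO_SALE };

struct Volume {
  uint32_t quality; // tier quality value from storage.config
};

struct TierAnswer {
  TierDecision decision;
  Volume *volume; // must carry the corresponding tier quality value
};

// Placeholder oracle: answers once per tier for a given object. A real
// policy would consult the volume directory; here a directory "hit" is
// simulated by the hash parity, yielding RW on a hit and WRITE on a miss.
std::vector<TierAnswer>
oracle_assign(uint64_t object_hash, std::vector<Volume> &tiers)
{
  std::vector<TierAnswer> answers;
  for (Volume &v : tiers) {
    TierDecision d = (object_hash % 2 == 0) ? TierDecision::RW : TierDecision::WRITE;
    answers.push_back({d, &v});
  }
  return answers;
}
```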
 
-The volume returned for the tier must be a volume with the corresponding tier 
quality value. In effect the current style
-of volume assignment is done for each tier, by assigning one volume out of all 
of the volumes of the same quality and
-returning one of `RW` or `WRITE` depending on whether the initial volume 
directory lookup succeeds. Note that as with
-current volume assignment it is presumed this can be done from in memory 
structures (no disk I/O required).
+The :term:`volume <cache volume>` returned for the tier must be a volume with
+the corresponding tier quality value. In effect, the current style of volume
+assignment is done for each tier, by assigning one volume out of all of the
+volumes of the same quality and returning one of ``RW`` or ``WRITE``, depending
+on whether the initial volume directory lookup succeeds. Note that as with
+current volume assignment, it is presumed this can be done from in-memory
+structures (no disk I/O required).
+
+If the oracle returns ``READ`` or ``RW`` for more than one tier, it must also
+return an ordering for those tiers (it may return an ordering for all tiers;
+those that are not readable will be ignored). For each tier, in that order, a
+read of cache storage is attempted for the object. A successful read locks that
+tier as the provider of cached content. If no tier has a successful read, or no
+tier is marked ``READ`` or ``RW``, then it is a cache miss. Any tier marked
+``RW`` that fails the read test is demoted to ``WRITE``.
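
The selection pass described above can be sketched as follows, again with
hypothetical names; ``try_read`` stands in for the actual cache storage read,
and tiers arrive in the oracle's declared order:

```cpp
#include <vector>

enum class TierDecision { READ, WRITE, RW, NO_SALE };

// Returns the index of the tier locked as the provider, or -1 on cache miss.
int
select_provider(std::vector<TierDecision> &tiers, bool (*try_read)(int tier))
{
  for (int i = 0; i < static_cast<int>(tiers.size()); ++i) {
    if (tiers[i] != TierDecision::READ && tiers[i] != TierDecision::RW)
      continue; // WRITE and NO_SALE tiers are never read
    if (try_read(i))
      return i; // a successful read locks this tier as the provider
    if (tiers[i] == TierDecision::RW)
      tiers[i] = TierDecision::WRITE; // failed RW demotes to WRITE
  }
  return -1; // no readable tier succeeded: cache miss
}
```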
+
+If the object is cached, every tier that returns ``WRITE`` receives the object
+to store in the selected volume (this includes ``RW`` returns that are demoted
+to ``WRITE``). This is a cache-to-cache copy, not from the :term:`origin server`.
+In this case, tiers marked ``RW`` that are not tested for read will not receive
+any data and will not be further involved in the request processing.
+
+For a cache miss, all tiers marked ``WRITE`` will receive data from the origin
+server connection (if successful).
+
+This means, among other things, that if there is a tier with the object, all
+other tiers that are written will get a local copy of the object, and the 
origin
+server will not be used. In terms of implementation, currently a cache write to
+a volume is done via the construction of an instance of :cpp:class:`CacheVC`
+which receives the object stream. For tiered storage, the same thing is done
+for each target volume.
+
+For cache volume overrides (via :file:`hosting.config`), this same process is
+used, except with only the stripes contained within the specified cache
+volume.
 
-If the oracle returns `READ` or `RW` for more than one tier, it must also 
return an ordering for those tiers (it may
-return an ordering for all tiers, ones that are not readable will be ignored). 
For each tier, in that order, a read of
-cache storage is attempted for the object. A successful read locks that tier 
as the provider of cached content. If no
-tier has a successful read, or no tier is marked `READ` or `RW` then it is a 
cache miss. Any tier marked `RW` that fails
-the read test is demoted to `WRITE`.
-
-If the object is cached every tier that returns `WRITE` receives the object to 
store in the selected volume (this
-includes `RW` returns that are demoted to `WRITE`). This is a cache to cache 
copy, not from the origin server. In this
-case tiers marked `RW` that are not tested for read will not receive any data 
and will not be further involved in the
-request processing.
-
-For a cache miss, all tiers marked `WRITE` will receive data from the origin 
server connection (if successful).
-
-This means, among other things, that if there is a tier with the object all 
other tiers that are written will get a
-local copy of the object, the origin server will not be used. In terms of 
implementation, currently a cache write to a
-volume is done via the construction of an instance of :cpp:class:`CacheVC` 
which recieves the object stream. For tiered storage the
-same thing is done for each target volume.
-
-For cache volume overrides (e.g. via :file:`hosting.config`) this same process 
is used except with only the volumes
-stripes contained within the specified cache volume.
-
--------
 Copying
 -------
 
-It may be necessary to provide a mechanism to copy objects between tiers 
outside of a client originated transaction. In
-terms of implementation this is straight forward using :cpp:class:`HttpTunnel` 
as if in a transaction only using a :cpp:class:`CacheVC`
-instance for both the producer and consumer. The more difficult question is 
what event would trigger a possible copy. A
-signal could be provided whenever a volume directory entry is deleted although 
it should be noted that the object in
-question may have already been evicted when this event happens.
+It may be necessary to provide a mechanism to copy objects between tiers outside
+of a client-originated transaction. In terms of implementation, this is
+straightforward using :cpp:class:`HttpTunnel` as if in a transaction, only using a
+:cpp:class:`CacheVC` instance for both the producer and consumer. The more
+difficult question is what event would trigger a possible copy. A signal could
+be provided whenever a volume directory entry is deleted, although it should be
+noted that the object in question may have already been evicted when this event
+happens.
 
-----------------
 Additional Notes
 ----------------
 
-As an example use, it would be possible to have only one cache volume that 
uses tiered storage for a particular set of
-domains using volume tagging. :file:`hosting.config` would be used to direct 
those domains to the selected cache volume.
-The oracle would check the URL in parallel and return `NO_SALE` for the tiers 
in the target cache volume for other
-domains. For the other tier (that of the unmarked storage units) the oracle 
would return `RW` for the tier in all cases
-as that tier would not be queried for the target domains.
+As an example use, it would be possible to have only one cache volume that uses
+tiered storage for a particular set of domains using volume tagging.
+:file:`hosting.config` would be used to direct those domains to the selected
+cache volume. The oracle would check the URL in parallel and return ``NO_SALE``
+for the tiers in the target cache volume for other domains. For the other tier
+(that of the unmarked storage units), the oracle would return ``RW`` for the
+tier in all cases, as that tier would not be queried for the target domains.
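
For the scenario above, the :file:`hosting.config` routing might look like the
following sketch. The domain name and volume numbers are hypothetical; volume 2
would be the cache volume holding the tiered storage units:

```
# Send the selected domains to the tiered cache volume.
domain=media.example.com volume=2
# Everything else uses the default (single tier) cache volume.
hostname=* volume=1
```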
+

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/hacking/config-var-impl.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/hacking/config-var-impl.en.rst 
b/doc/arch/hacking/config-var-impl.en.rst
index 2f3a584..a0afcaf 100644
--- a/doc/arch/hacking/config-var-impl.en.rst
+++ b/doc/arch/hacking/config-var-impl.en.rst
@@ -44,25 +44,25 @@
 .. _RECU_DYNAMIC: recu-dynamic_
 
 
-=====================================
 Configuration Variable Implementation
 =====================================
 
-Adding a new configuration variable in :file:`records.config` requires a 
number of steps which are mostly documented
-here.
+Adding a new configuration variable in :file:`records.config` requires a number
+of steps which are mostly documented here.
 
-Before adding a new configuration variable, please discuss it on the mailing 
list. It will commonly be the case that a
-better name will be suggested or a more general approach to the problem which 
solves several different issues.
+Before adding a new configuration variable, please discuss it on the mailing
+list. It is common for a better name, or a more general approach to the
+problem which solves several different issues, to be suggested.
 
-=====================================
 Defining the Variable
-=====================================
+=====================
 
-To begin the new configuration variables must be added to |RecordsConfig.cc|_. 
This contains a long array of
-configuration variable records. The fields for each record are
+To begin, the new configuration variable must be added to |RecordsConfig.cc|_.
+This contains a long array of configuration variable records. The fields for
+each record are:
 
 type:``RecT``
-   Type of record. There valid values are
+   Type of record. The valid values are:
 
    ``RECT_NULL``
       Undefined record.
@@ -85,22 +85,28 @@ type:``RecT``
    ``RECT_PLUGIN``
       Plugin created statistic.
 
-   In general ``RECT_CONFIG`` should be used unless it is required that the 
value not be shared among members of a
-   cluster in which case ``RECT_LOCAL`` should be used. If you use 
``RECT_LOCAL`` you must also start the line with ``LOCAL`` instead of 
``CONFIG`` and the name should use ``.local.`` instead of ``.config.``.
+   In general, ``RECT_CONFIG`` should be used unless it is required that the
+   value not be shared among members of a cluster, in which case ``RECT_LOCAL``
+   should be used. If you use ``RECT_LOCAL``, you must also start the line with
+   ``LOCAL`` instead of ``CONFIG`` and the name should use ``.local.`` instead
+   of ``.config.``.
 
 name:``char const*``
-   The fully qualified name of the configuration variable. Although there 
appears to be a hierarchial naming scheme,
-   that's just a convention, it is not actually used by the code. Nonetheless 
new variables should adhere to the
-   hierarchial scheme.
+   The fully qualified name of the configuration variable. Although there
+   appears to be a hierarchical naming scheme, that's just a convention, and it
+   is not actually used by the code. Nonetheless, new variables should adhere
+   to the hierarchical scheme.
 
 value_type:``RecDataT``
-   The data type of the value. It should be one of ``RECD_INT``, 
``RECD_STRING``, ``RECD_FLOAT`` as appropriate.
+   The data type of the value. It should be one of ``RECD_INT``,
+   ``RECD_STRING``, or ``RECD_FLOAT``, as appropriate.
 
 default:``char const*``
-   The default value for the variable. This is always a string regardless of 
the *value_type*.
+   The default value for the variable. This is always a string regardless of
+   the *value_type*.
 
 update:``RecUpdateT``
-   Information about how the variable is updated. The valid values are
+   Information about how the variable is updated. The valid values are:
 
    ``RECU_NULL``
       Behavior is unknown or unspecified.
@@ -120,12 +126,14 @@ update:``RecUpdateT``
       The :ref:`traffic_cop` process must be restarted for a new value to take 
effect.
 
 required:``RecordRequiredType``
-   Effectively a boolean that specifies if the record is required to be 
present, with ``RR_NULL`` meaning not required
-   and ``RR_REQUIRED`` indicating that it is required. Given that using 
``RR_REQUIRED`` would be a major
+   Effectively a boolean that specifies if the record is required to be 
present,
+   with ``RR_NULL`` meaning not required and ``RR_REQUIRED`` indicating that it
+   is required. Given that using ``RR_REQUIRED`` would be a major
    incompatibility, ``RR_NULL`` is generally the better choice.
 
 check:``RecCheckT``
-   Additional type checking. It is unclear if this is actually implemented. 
The valid values are
+   Additional type checking. It is unclear if this is actually implemented. The
+   valid values are:
 
    ``RECC_NULL``
       No additional checking.
@@ -139,12 +147,15 @@ check:``RecCheckT``
    ``RECC_IP``
       Verify the value is an IP address. Unknown if this checks for IPv6.
 
+.. XXX confirm RECC_IP & IPv6 behavior
+
 pattern:``char const*``
-   Even more validity checking. This provides a regular expressions (PCRE 
format) for validating the value. This can be
+   This provides a regular expression (PCRE format) for validating the value,
+   beyond the basic type validation performed by ``RecCheckT``. This can be
    ``NULL`` if there is no regular expression to use.
 
 access:``RecAccessT``
-   Access control. The valid values are
+   Access control. The valid values are:
 
    ``RECA_NULL``
       The value is read / write.
@@ -153,27 +164,34 @@ access:``RecAccessT``
       The value is read only.
 
    ``RECA_NO_ACCESS``
-      No access to the value - only privileged levels parts of the ATS can 
access the value.
+      No access to the value; only privileged parts of ATS can access the
+      value.
 
-=====================================
 Variable Infrastructure
-=====================================
+=======================
 
-The primary effort in defining a configuration variable is handling updates, 
generally via :option:`traffic_line -x`. This
-is handled in a generic way, as described in the next section, or in a 
:ref:`more specialized way
-<http-config-var-impl>` (built on top of the generic mechanism) for HTTP 
related configuration variables. This is only
-needed if the variable is marked as dynamically updateable (|RECU_DYNAMIC|_) 
although HTTP configuration variables
-should be dynamic if possible.
+The primary effort in defining a configuration variable is handling updates,
+generally via :option:`traffic_line -x`. This is handled in a generic way, as
+described in the next section, or in a :ref:`more specialized way 
<http-config-var-impl>`
+(built on top of the generic mechanism) for HTTP related configuration
+variables. This is only needed if the variable is marked as dynamically
+updateable (|RECU_DYNAMIC|_), although HTTP configuration variables should be
+dynamic if possible.
 
---------------------------
 Documentation and Defaults
 --------------------------
 
-A configuration variable should be documented in :file:`records.config`. There 
are many examples  in the file already that can be used for guidance. The 
general format is to use the tag ::
+A configuration variable should be documented in :file:`records.config`. There
+are many examples in the file already that can be used for guidance. The 
general
+format is to use the tag ::
 
-   .. ts:cv::
+   .. ts:cv:: variable.name.here
 
-The arguments to this are the same as for the configuration file. The 
documentation generator will pick out key bits and use them to decorate the 
entry. In particular if a value is present it will be removed and used as the 
default value. You can attach some additional options to the variable. These are
+The arguments to this are the same as for the configuration file. The
+documentation generator will pick out key bits and use them to decorate the
+entry. In particular, if a value is present, it will be removed and used as the
+default value. You can attach some additional options to the variable. These
+are:
 
 reloadable
    The variable can be reloaded via command line on a running Traffic Server.
@@ -186,98 +204,122 @@ deprecated
 
 .. topic:: Example
 
-   ::
-
+   .. ts:cv:: custom.variable
       :reloadable:
       :metric: minutes
       :deprecated:
 
-If you need to refer to another configuration variable in the documentation, 
you can use the form ::
+If you need to refer to another configuration variable in the documentation, 
you
+can use the form ::
 
    :ts:cv:`the.full.name.of.the.variable`
 
-This will display the name as a link to the definition.
+This will display the name as a link to the full definition.
 
-In general a new configuration variable should not be present in the default 
:file:`records.config`. If it is added, such defaults should be added to the 
file ``proxy/config/records.config.default.in``. This is used to generate the 
default :file:`records.config`. Just add the variable to the file in an 
appropriate place with a proper default as this will now override whatever 
default you put in the code for new installs.
+In general, a new configuration variable should not be present in the default
+:file:`records.config`. If it is added, such defaults should be added to the
+file ``proxy/config/records.config.default.in``. This is used to generate the
+default :file:`records.config`. Just add the variable to the file in an
+appropriate place with a proper default, as this will now override whatever
+default you put in the code for new installs.
 
-------------------------------
 Handling Updates
-------------------------------
+----------------
 
-The simplest mechanism for handling updates is the 
``REC_EstablishStaticConfigXXX`` family of functions. This mechanism
-will cause the value in the indicated instance to be updated in place when an 
update to :file:`records.config` occurs.
-This is done asynchronously using atomic operations. Use of these variables 
must keep that in mind.
+The simplest mechanism for handling updates is the 
``REC_EstablishStaticConfigXXX``
+family of functions. This mechanism will cause the value in the indicated
+instance to be updated in place when an update to :file:`records.config` 
occurs.
+This is done asynchronously using atomic operations. Use of these variables 
must
+keep that in mind.
 
-If a variable requires additional handling when updated a callback can be 
registered which is called when the variable
-is updated. This is what the ``REC_EstablishStaticConfigXXX`` calls do 
internally with a callback that simply reads the
-new value and writes it to storage indicated by the call parameters. The 
functions used are the ``link_XXX`` static
-functions in |RecCore.cc|_.
+If a variable requires additional handling when updated, a callback can be
+registered which is called when the variable is updated. This is what the
+``REC_EstablishStaticConfigXXX`` calls do internally with a callback that 
simply
+reads the new value and writes it to storage indicated by the call parameters.
+The functions used are the ``link_XXX`` static functions in |RecCore.cc|_.
 
-To register a configuration variable callback, call 
``RecRegisterConfigUpdateCb`` with the arguments
+To register a configuration variable callback, call 
``RecRegisterConfigUpdateCb``
+with the arguments:
 
 ``char const*`` *name*
    The variable name.
 
 *callback*
-   A function with the signature ``<int (char const* name, RecDataT type, 
RecData data, void* cookie)>``. The *name*
-   value passed is the same as the *name* passed to the registration function 
as is the *cookie* argument. The *type* and
-   *data* are the new value for the variable. The return value is currently 
ignored. For future compatibility return
-   ``REC_ERR_OKAY``.
+   A function with the signature ``<int (char const* name, RecDataT type, 
RecData data, void* cookie)>``.
+   The :arg:`name` value passed is the same as the :arg:`name` passed to the
+   registration function as is the :arg:`cookie` argument. The :arg:`type` and
+   :arg:`data` are the new value for the variable. The return value is 
currently
+   ignored. For future compatibility return ``REC_ERR_OKAY``.
 
 ``void*`` *cookie*
-   A value passed to the *callback*. This is only for the callback, the 
internals simply store it and pass it on.
+   A value passed to the *callback*. This is only for the callback; the
+   internals simply store it and pass it on.
 
-*callback* is called under lock so it should be quick and not block. If that 
is necessary a continuation should be
-scheduled to handle the required action.
+*callback* is called under lock, so it should be quick and not block. If
+blocking is necessary, a :term:`continuation` should be scheduled to handle
+the required action.
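
A sketch of such a callback follows, with reduced stand-ins for the librecords
types. The variable name, the ``MyState`` struct, and the stand-in declarations
are all hypothetical:

```cpp
// Stand-ins for the real librecords types, just enough to show the shape.
enum RecDataT { RECD_INT, RECD_STRING, RECD_FLOAT };
union RecData {
  long rec_int;
  const char *rec_string;
  float rec_float;
};
const int REC_ERR_OKAY = 0;

struct MyState {
  long limit;
};

// Matches the documented signature. Runs under lock, so it only copies the
// new value into the cookie-provided storage and returns quickly.
int
update_limit(const char *name, RecDataT type, RecData data, void *cookie)
{
  (void)name; // unused in this sketch
  MyState *state = static_cast<MyState *>(cookie);
  if (type == RECD_INT)
    state->limit = data.rec_int;
  return REC_ERR_OKAY; // return value currently ignored
}

// Registration would then look like (requires the real librecords API):
//   RecRegisterConfigUpdateCb("proxy.config.example.limit", &update_limit, &state);
```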
 
 .. note::
-   The callback occurs asynchronously. For HTTP variables as described in the 
next section, this is handled by the more
-   specialized HTTP update mechanisms. Otherwise it is the implementor's 
responsibility to avoid race conditions.
+
+   The callback occurs asynchronously. For HTTP variables as described in the
+   next section, this is handled by the more specialized HTTP update 
mechanisms.
+   Otherwise it is the implementor's responsibility to avoid race conditions.
 
 .. _http-config-var-impl:
 
-------------------------
 HTTP Configuration Values
 -------------------------
 
-Variables used for HTTP processing should be declared as members of the 
``HTTPConfigParams`` structure (but :ref:`see
-<overridable-config-vars>`) and use the specialized HTTP update mechanisms 
which handle synchronization and
-initialization issues.
-
-The configuration logic maintains two copies of the ``HTTPConfigParams`` 
structure - the master copy and the current
-copy. The master copy is kept in the ``m_master`` member of the ``HttpConfig`` 
singleton. The current copy is kept in
-the ConfigProcessor. The goal is to provide a (somewhat) atomic update for 
configuration variables which are loaded
-individually in to the master copy as updates are received and then bulk 
copied to a new instance which is then swapped
-in as the current copy. The HTTP state machine interacts with this mechanism 
to avoid race conditions.
-
-For each variable a mapping between the variable name and the appropriate 
member in the master copy should be
-established between in the ``HTTPConfig::startup`` method. The 
``HttpEstablishStaticConfigXXX`` functions should be used
-unless there is an strong, explicit reason to not do so.
-
-The ``HTTPConfig::reconfigure`` method handles the current copy of the HTTP 
configuration variables. Logic should be
-added here to copy the value from the master copy to the current copy. 
Generally this will be a simple assignment. If
-there are dependencies between variables those should be enforced / checked in 
this method.
+Variables used for HTTP processing should be declared as members of the
+``HTTPConfigParams`` structure (but see :ref:`overridable-config-vars` for
+further details) and use the specialized HTTP update mechanisms which handle
+synchronization and initialization issues.
+
+The configuration logic maintains two copies of the ``HTTPConfigParams``
+structure, the master copy and the current copy. The master copy is kept in the
+``m_master`` member of the ``HttpConfig`` singleton. The current copy is kept 
in
+the ConfigProcessor. The goal is to provide a (somewhat) atomic update for
+configuration variables, which are loaded individually into the master copy as
+updates are received and then bulk copied to a new instance which is then
+swapped in as the current copy. The HTTP state machine interacts with this
+mechanism to avoid race conditions.
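
The two-copy update pattern can be sketched generically as follows. The
``Params`` type and names are simplified stand-ins, and the real code also
reference counts readers before freeing the replaced copy:

```cpp
#include <atomic>

// Simplified stand-in for HTTPConfigParams.
struct Params {
  int connect_attempts;
};

static Params master = {0};                      // updated field by field
static std::atomic<Params *> current{new Params{0}}; // what readers see

void
reconfigure()
{
  Params *next = new Params(master); // bulk copy of the master
  current.store(next);               // swapped in as the new current copy
  // The replaced instance is leaked here for brevity; the real code frees
  // it only once no state machine still holds a reference to it.
}
```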
+
+For each variable, a mapping between the variable name and the appropriate
+member in the master copy should be established in the ``HTTPConfig::startup``
+method. The ``HttpEstablishStaticConfigXXX`` functions should be used unless
+there is a strong, explicit reason to not do so.
+
+The ``HTTPConfig::reconfigure`` method handles the current copy of the HTTP
+configuration variables. Logic should be added here to copy the value from the
+master copy to the current copy. Generally this will be a simple assignment. If
+there are dependencies between variables, those should be checked and enforced
+in this method.
 
 .. _overridable-config-vars:
 
------------------------
 Overridable Variables
------------------------
+---------------------
 
-HTTP related variables that are changeable per transaction are stored in the 
``OverridableHttpConfigParams`` structure,
-an instance of which is the ``oride`` member of ``HTTPConfigParams`` and 
therefore the points in the previous section
-still apply. The only difference for that is the further ``.oride`` in the 
structure references.
+HTTP related variables that are changeable per transaction are stored in the
+``OverridableHttpConfigParams`` structure, an instance of which is the 
``oride``
+member of ``HTTPConfigParams``, and therefore the points in the previous
+section still apply. The only difference is the additional ``.oride`` layer in
+the structure references.
 
-In addition the variable is required to be accessible from the transaction 
API. In addition to any custom API functions
-used to access the value, the following items are required for generic access
+The variable is required to be accessible from the transaction API. In addition
+to any custom API functions used to access the value, the following items are
+required for generic access:
 
 #. Add a value to the ``TSOverridableConfigKey`` enumeration in |ts.h.in|_.
 
-#. Augment the ``TSHttpTxnConfigFind`` function to return this enumeration 
value when given the name of the configuration
-   variable. Be sure to count the charaters very carefully.
+#. Augment the ``TSHttpTxnConfigFind`` function to return this enumeration 
value
+   when given the name of the configuration variable. Be sure to count the
+   characters very carefully.
+
+#. Augment the ``_conf_to_memberp`` function in |InkAPI.cc|_ to return a 
pointer
+   to the appropriate member of ``OverridableHttpConfigParams`` and set the 
type
+   if not a byte value.
 
-#. Augment the ``_conf_to_memberp`` function in |InkAPI.cc|_ to return a 
pointer to the appropriate member of
-   ``OverridableHttpConfigParams`` and set the type if not a byte value.
+#. Update the testing logic in |InkAPITest.cc|_ by adding the string name of 
the
+   configuration variable to the ``SDK_Overridable_Configs`` array.
 
-#. Update the testing logic in |InkAPITest.cc|_ by adding the string name of 
the configuration variable to the
-   ``SDK_Overridable_Configs`` array.

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/hacking/index.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/hacking/index.en.rst b/doc/arch/hacking/index.en.rst
index 0a5b52a..2fbb17c 100644
--- a/doc/arch/hacking/index.en.rst
+++ b/doc/arch/hacking/index.en.rst
@@ -3,26 +3,27 @@ Hacking
 
 .. Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
 
 Introduction
 ------------
 
-This is a documentation stub on how to hack Apache Traffic Server. Here we try 
to document things such as how to write
-and run unit or regression tests or how to inspect the state of the core with 
a debugger.
+This is a documentation stub on how to hack Apache Traffic Server. Here we try
+to document things such as how to write and run unit or regression tests or how
+to inspect the state of the core with a debugger.
 
 .. toctree::
    :maxdepth: 2

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/hacking/release-process.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/hacking/release-process.en.rst 
b/doc/arch/hacking/release-process.en.rst
index b74c75c..5ef784e 100644
--- a/doc/arch/hacking/release-process.en.rst
+++ b/doc/arch/hacking/release-process.en.rst
@@ -16,77 +16,94 @@
    under the License.
 
 
-==============================
 Traffic Server Release Process
 ==============================
 
-Managing release is easiest in an environment that is as clean as possible. 
For this reason cloning the code base in to a new directory for the release 
process is recommended.
+Managing a release is easiest in an environment that is as clean as possible.
+For this reason, cloning the code base into a new directory for the release
+process is recommended.
 
-------------
 Requirements
 ------------
 
 * A system for git and building.
-* A cryptographic key that has been signed by at least two other PMC members. 
This should be preferentially associated with your ``apache.org`` email address 
but that is not required.
+
+* A cryptographic key that has been signed by at least two other PMC members.
+  This should preferably be associated with your ``apache.org`` email
+  address, but that is not required.
 
 .. _release-management-release-candidate:
 
------------------
 Release Candidate
 -----------------
 
-The first step in a release is making a release candidate. This is distributed 
to the community for validation before the actual release.
+The first step in a release is making a release candidate. This is distributed
+to the community for validation before the actual release.
 
 Document
 --------
 
-Gather up information about the changes for the release. The ``CHANGES`` file 
is a good starting point. You may also
-want to check the commits since the last release. The primary purpose of this 
is to generate a list of the important
+Gather up information about the changes for the release. The ``CHANGES`` file
+is a good starting point. You may also want to check the commits since the last
+release. The primary purpose of this is to generate a list of the important
 changes since the last release.
 
-Create or update a page on the Wiki for the release. If it is a major or minor release it should have its own page. Use
-the previous release page as a template. Point releases should get a section at the end of the corresponding release
-page.
+Create or update a page on the Wiki for the release. If it is a major or minor
+release it should have its own page. Use the previous release page as a
+template. Point releases should get a section at the end of the corresponding
+release page.
 
-Write an announcement for the release. This will contain much of the same information that is on the Wiki page but more
-concisely. Check the `mailing list archives <http://mail-archives.apache.org/mod_mbox/trafficserver-dev/>`_ for examples to use as a base.
+Write an announcement for the release. This will contain much of the same
+information that is on the Wiki page but more concisely. Check the
+`mailing list archives <http://mail-archives.apache.org/mod_mbox/trafficserver-dev/>`_
+for examples to use as a base.
 
 Build
 -----
 
-Go to the top level source directory.
+#. Go to the top level source directory.
+
+#. Check the version in ``configure.ac``. There are two values near the top that
+   need to be set, ``TS_VERSION_S`` and ``TS_VERSION_N``. These are the release
+   version number in different encodings.
 
-* Check the version in ``configure.ac``. There are two values near the top that need to be set, ``TS_VERSION_S`` and
-  ``TS_VERSION_N``. These are the release version number in different encodings.
-* Check the variable ``RC`` in the top level ``Makefile.am``. This should be the point release value. This needs to be changed for every release candidate. The first release candidate is 0 (zero).
+#. Check the variable ``RC`` in the top level ``Makefile.am``. This should be
+   the point release value. This needs to be changed for every release
+   candidate. The first release candidate is ``0`` (zero).
 
-Execute the following commands to make the distribution files. ::
+#. Execute the following commands to make the distribution files. ::
 
-   autoreconf -i
-   ./configure
-   make rel-candidate
+      autoreconf -i
+      ./configure
+      make rel-candidate
 
-This will create the distribution files and sign them using your key. Expect to be prompted twice for your passphrase
-unless you use an ssh key agent. If you have multiple keys you will need to set the default appropriately beforehand, as
-no option will be provided to select the signing key. The files should have names that start
-with ``trafficserver-X.Y.Z-rcA.tar.bz2`` where ``X.Y.Z`` is the version and ``A`` is the release candidate counter. There
-should be four such files, one with no extension and three others with the extensions ``asc``, ``md5``, and ``sha1``. This will also create a signed git tag of the form ``X.Y.Z-rcA``.
+These steps will create the distribution files and sign them using your key.
+Expect to be prompted twice for your passphrase unless you use an ssh key agent.
+If you have multiple keys you will need to set the default appropriately
+beforehand, as no option will be provided to select the signing key. The files
+should have names that start with ``trafficserver-X.Y.Z-rcA.tar.bz2`` where
+``X.Y.Z`` is the version and ``A`` is the release candidate counter. There
+should be four such files, one with no extension and three others with the
+extensions ``asc``, ``md5``, and ``sha1``. This will also create a signed git
+tag of the form ``X.Y.Z-rcA``.
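The two ``configure.ac`` values mentioned above can be cross-checked with a small sketch. The encoding ``TS_VERSION_N = major*1000000 + minor*1000 + micro`` is an assumption here, not stated in this document; verify it against the actual ``configure.ac`` before relying on it.

```shell
# Sketch: derive the numeric version encoding from the dotted version string.
# ASSUMPTION: TS_VERSION_N = major*1000000 + minor*1000 + micro; the
# authoritative definition lives in configure.ac.
TS_VERSION_S="5.0.0"            # illustrative release version
major=${TS_VERSION_S%%.*}
rest=${TS_VERSION_S#*.}
minor=${rest%%.*}
micro=${rest#*.}
TS_VERSION_N=$((major * 1000000 + minor * 1000 + micro))
echo "TS_VERSION_S=$TS_VERSION_S TS_VERSION_N=$TS_VERSION_N"
```

Under that assumed encoding, ``5.0.0`` would yield ``5000000``; if the computed value disagrees with ``configure.ac``, trust ``configure.ac``.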
 
 Distribute
 ----------
 
-The release candidate files should be uploaded to some public storage. Your personal storage on ``people.apach.org`` is
-a reasonable location to use.
+The release candidate files should be uploaded to some public storage. Your
+personal storage on *people.apache.org* is a reasonable location to use.
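Before uploading, the checksum files can be spot-checked locally. This is a sketch, assuming the ``.md5``/``.sha1`` files are in the standard coreutils format; the tarball name and payload below are stand-ins, not real release artifacts.

```shell
# Sketch: spot-check checksum files the way a reviewer would.
# A throwaway payload stands in for the real trafficserver-X.Y.Z-rcA.tar.bz2,
# and the coreutils checksum-file format is an assumption.
tarball="trafficserver-example-rc0.tar.bz2"   # illustrative name
printf 'example payload\n' > "$tarball"
md5sum "$tarball"  > "$tarball.md5"
sha1sum "$tarball" > "$tarball.sha1"
# The verification commands themselves:
md5sum -c "$tarball.md5"
sha1sum -c "$tarball.sha1"
```

The ``.asc`` signature would additionally be checked with ``gpg --verify`` against the real tarball.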
 
-Send the release candiate announcement to the ``users`` and ``dev`` mailinging lists, noting that it is a release
-*candidate* and providing a link to the distribution files you uploaded. This announcement should also call for a vote
+Send the release candidate announcement to the *users* and *dev* mailing
+lists, noting that it is a release *candidate* and providing a link to the
+distribution files you uploaded. This announcement should also call for a vote
 on the candidate, generally with a 72 hours time limit.
 
-If the voting was successful (at least three "+1" votes and no "-1" votes) proceed to :ref:`release-management-official-release`. Otherwise repeat the :ref:`release-management-release-candidate` process.
+If the voting was successful (at least three "+1" votes and no "-1" votes),
+proceed to :ref:`release-management-official-release`. Otherwise, repeat the
+:ref:`release-management-release-candidate` process.
 
 .. _release-management-official-release:
 
-----------------
 Official Release
 ----------------
 
@@ -94,30 +111,36 @@ Build the distribution files with the command ::
 
    make release
 
-Be sure to not have changed anything since the release candidate was built so the checksums are identical. This will
-create a signed git tag of the form ``X.Y.Z`` and produce the distribution files. Push the tag to the ASF repository with
-the command ::
+Be sure not to have changed anything since the release candidate was built so
+the checksums are identical. This will create a signed git tag of the form
+``X.Y.Z`` and produce the distribution files. Push the tag to the ASF repository
+with the command ::
 
    git push origin X.Y.Z
 
-This presumes ``origin`` is the name for the ASF remote repository which is correct if you originally clone from the ASF
-repository.
+This presumes ``origin`` is the name for the ASF remote repository, which is
+correct if you originally cloned from the ASF repository.
 
-The distribution files must be added to an SVN repository. This can be accessed with the command::
+The distribution files must be added to an SVN repository. This can be accessed
+with the command::
 
   svn co https://dist.apache.org/repos/dist/release/trafficserver <local-directory>
 
-All four of the distribution files go here. If you are making a point release then you should also remove the distribution
-files for the previous release. Allow 24 hours for the files to be distributed through the ASF infrastructure.
+All four of the distribution files go here. If you are making a point release
+then you should also remove the distribution files for the previous release.
+Allow 24 hours for the files to be distributed through the ASF infrastructure.
 
-The Traffic Server website must be updated. This is an SVN repository which you can access with ::
+The Traffic Server website must be updated. This is an SVN repository which you
+can access with ::
 
   svn co https://svn.apache.org/repos/asf/trafficserver/site/trunk <local-directory>
 
 The files of interest are in the ``content`` directory.
 
 ``index.html``
-   This is the front page. The places to edit here are any security announcements at the top and the "News" section.
+   This is the front page. The places to edit here are any security
+   announcements at the top and the "News" section.
+
 ``downloads.en.mdtext``
    Update the downloads page to point to the new download objects.
 
@@ -125,15 +148,24 @@ After making changes, commit them and then run ::
 
    publish.pl trafficserver <apache-id>
 
-on the ``people.apache.org`` host.
+on the ``people.apache.org`` host.
+
+If needed, update the Wiki page for the release to point at the release
+distribution files.
+
+Update the announcement, if needed, to refer to the release distribution files
+and remove the comments concerning the release candidate. This announcement
+should be sent to the *users* and *dev* mailing lists. It should also be sent
+to the ASF announcement list, which must be done using an ``apache.org`` email
+address.
+
+Finally, update various files after the release:
+
+* The ``STATUS`` file for master and for the release branch to include this
+  version.
 
-If needed, update the Wiki page for the release to point at the release distribution files.
+* The ``CHANGES`` file to have a header for the next version.
 
-Update the announcement if needed to refer to the release distribution files and remove the comments concerning the release candidate. This announcement should be sent to the ``users`` and ``dev`` mailing lists. It should also be sent to the ASF announcement list, which must be done using an ``apache.org`` email address.
+* ``configure.ac`` to be set to the next version.
 
-Finally, update various files after the release.
+* In the top level ``Makefile.am`` change ``RC`` to have the value ``0``.
 
-   * The ``STATUS`` file for master and for the release branch to include this version.
-   * The ``CHANGES`` file to have a header for the next version.
-   * ``configure.ac`` to be set to the next version.
-   * In the top level ``Makefile.am`` change ``RC`` to have the value ``0``.

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/index.en.rst
----------------------------------------------------------------------
diff --git a/doc/arch/index.en.rst b/doc/arch/index.en.rst
index 6cc9fac..2ed51dc 100644
--- a/doc/arch/index.en.rst
+++ b/doc/arch/index.en.rst
@@ -3,29 +3,32 @@ Architecture and Hacking
 
 .. Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  "License"); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
 
 Introduction
---------------
+------------
 
-The original architectural documents for Traffic Server were lost in the transition to an open source project. The
-documents in this section are provisional and were written based on the existing code. The purpose is to have a high
-level description of aspects of Traffic Server to better inform ongoing work.
+The original architectural documents for Traffic Server were lost in the
+transition to an open source project. The documents in this section are
+provisional and were written based on the existing code. The purpose is to have
+a high-level description of aspects of Traffic Server to better inform ongoing
+work.
 
-In the final section on "hacking" we try to document our approaches to understanding and modifying the source.
+In the final section on "hacking" we try to document our approaches to
+understanding and modifying the source.
 
 Contents:
 

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/glossary.en.rst
----------------------------------------------------------------------
diff --git a/doc/glossary.en.rst b/doc/glossary.en.rst
index 94996db..758523e 100644
--- a/doc/glossary.en.rst
+++ b/doc/glossary.en.rst
@@ -99,3 +99,25 @@ Glossary
       The unit of storage in the cache. All reads from the cache always read exactly one fragment. Fragments may be
       written in groups, but every write is always an integral number of fragments. Each fragment has a corresponding
       :term:`directory entry` which describes its location in the cache storage.
+
+   object store
+      The database of :term:`cache objects <cache object>`.
+
+   fresh
+      The state of a :term:`cache object` which can be served directly from
+      the cache in response to client requests. Fresh objects have not met or
+      passed their :term:`origin server` defined expiration time, nor have they
+      reached the algorithmically determined :term:`stale` age.
+
+   stale
+      The state of a :term:`cache object` which is not yet expired, but has
+      reached an algorithmically determined age at which the :term:`origin server`
+      will be contacted to :term:`revalidate <revalidation>` the freshness of
+      the object. Contrast with :term:`fresh`.
+
+   origin server
+      An HTTP server which provides the original source of content being cached
+      by Traffic Server.
+
+   cache partition
+

http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/reference/configuration/records.config.en.rst
----------------------------------------------------------------------
diff --git a/doc/reference/configuration/records.config.en.rst b/doc/reference/configuration/records.config.en.rst
index 6397461..72bb3d3 100644
--- a/doc/reference/configuration/records.config.en.rst
+++ b/doc/reference/configuration/records.config.en.rst
@@ -1334,9 +1334,9 @@ Cache Control
 
 .. ts:cv:: CONFIG proxy.config.cache.target_fragment_size INT 1048576
 
-   Sets the target size of a contiguous fragment of a file in the disk cache. Accepts values that are powers of 2, e.g. 65536, 131072,
-   262144, 524288, 1048576, 2097152, etc. When setting this, consider that larger numbers could waste memory on slow connections,
-   but smaller numbers could increase (waste) seeks.
+   Sets the target size of a contiguous fragment of a file in the disk cache.
+   When setting this, consider that larger numbers could waste memory on slow
+   connections, but smaller numbers could increase (waste) seeks.
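For instance, halving the default target would look like this in :file:`records.config` (``524288`` is purely an illustrative value, not a recommendation):

```
CONFIG proxy.config.cache.target_fragment_size INT 524288
```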
 
 RAM Cache
 =========
