docs: update focused on architecture documentation
Project: http://git-wip-us.apache.org/repos/asf/trafficserver/repo Commit: http://git-wip-us.apache.org/repos/asf/trafficserver/commit/aa37d0ab Tree: http://git-wip-us.apache.org/repos/asf/trafficserver/tree/aa37d0ab Diff: http://git-wip-us.apache.org/repos/asf/trafficserver/diff/aa37d0ab Branch: refs/heads/master Commit: aa37d0ab553ef2f187c4742544248695701202af Parents: 1f1e2ae Author: Jon Sime <[email protected]> Authored: Tue Nov 18 12:34:08 2014 -0800 Committer: James Peach <[email protected]> Committed: Wed Dec 10 13:35:40 2014 -0800 ---------------------------------------------------------------------- doc/admin/cluster-howto.en.rst | 2 +- doc/admin/configuring-cache.en.rst | 94 +- doc/arch/cache/cache-api.en.rst | 24 +- doc/arch/cache/cache-appendix.en.rst | 143 +- doc/arch/cache/cache-arch.en.rst | 1284 +++++++++++------- doc/arch/cache/cache-data-structures.en.rst | 117 +- doc/arch/cache/cache.en.rst | 28 +- doc/arch/cache/ram-cache.en.rst | 165 ++- doc/arch/cache/tier-storage.en.rst | 165 ++- doc/arch/hacking/config-var-impl.en.rst | 222 +-- doc/arch/hacking/index.en.rst | 27 +- doc/arch/hacking/release-process.en.rst | 132 +- doc/arch/index.en.rst | 35 +- doc/glossary.en.rst | 22 + .../configuration/records.config.en.rst | 6 +- 15 files changed, 1560 insertions(+), 906 deletions(-) ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/admin/cluster-howto.en.rst ---------------------------------------------------------------------- diff --git a/doc/admin/cluster-howto.en.rst b/doc/admin/cluster-howto.en.rst index 1c00616..a34ead0 100644 --- a/doc/admin/cluster-howto.en.rst +++ b/doc/admin/cluster-howto.en.rst @@ -140,7 +140,7 @@ cluster, for example:: 127.1.2.5:80 After successfully joining a cluster, all changes of global configurations -performed on any node in that cluster will take effect on **all** nodes, removing +performed on any node in that cluster will take 
effect on all nodes, removing the need to manually duplicate configuration changes across each node individually. Deleting Nodes from a Cluster http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/admin/configuring-cache.en.rst ---------------------------------------------------------------------- diff --git a/doc/admin/configuring-cache.en.rst b/doc/admin/configuring-cache.en.rst index dc009d2..22a31e0 100644 --- a/doc/admin/configuring-cache.en.rst +++ b/doc/admin/configuring-cache.en.rst @@ -21,8 +21,8 @@ Configuring the Cache under the License. The Traffic Server cache consists of a high-speed object database called -the *object store* that indexes objects according to URLs and their -associated headers. +the :term:`object store` that indexes :term:`cache objects <cache object>` +according to URLs and their associated headers. .. toctree:: :maxdepth: 2 @@ -31,16 +31,16 @@ The Traffic Server Cache ======================== The Traffic Server cache consists of a high-speed object database called -the *object store*. The object store indexes objects according to URLs -and associated headers. This enables Traffic Server to store, retrieve, -and serve not only web pages, but also parts of web pages - which -provides optimum bandwidth savings. Using sophisticated object -management, the object store can cache alternate versions of the same -object (versions may differ because of dissimilar language or encoding -types). It can also efficiently store very small and very large -documents, thereby minimizing wasted space. When the cache is full, -Traffic Server removes stale data to ensure the most requested objects -are kept readily available and fresh. +the :term:`object store`. The object store indexes +:term:`cache objects <cache object>` according to URLs and associated headers. +This enables Traffic Server to store, retrieve, and serve not only web pages, +but also parts of web pages - which provides optimum bandwidth savings. 
Using +sophisticated object management, the object store can cache +:term:`alternate` versions of the same object (versions may differ because of +dissimilar language or encoding types). It can also efficiently store very +small and very large documents, thereby minimizing wasted space. When the +cache is full, Traffic Server removes :term:`stale` data to ensure the most +requested objects are kept readily available and fresh. Traffic Server is designed to tolerate total disk failures on any of the cache disks. If the disk fails completely, then Traffic Server marks the @@ -50,11 +50,15 @@ fail, then Traffic Server goes into proxy-only mode. You can perform the following cache configuration tasks: -- Change the total amount of disk space allocated to the cache: refer +- Change the total amount of disk space allocated to the cache; refer to `Changing Cache Capacity`_. + - Partition the cache by reserving cache disk space for specific - protocols and origin servers/domains; refer to `Partitioning the Cache`_. + protocols and :term:`origin servers/domains <origin server>`; refer to + `Partitioning the Cache`_. + - Delete all data in the cache; refer to `Clearing the Cache`_. + - Override cache directives for a requested domain name, regex on a url, hostname or ip, with extra filters for time, port, method of the request, and more. ATS can be configured to never cache, always cache, @@ -85,7 +89,7 @@ resistance against this problem. In addition, *CLFUS* also supports compressing in the RAM cache itself. This can be useful for content which is not compressed by itself (e.g. images). This should not be confused with ``Content-Encoding: gzip``, this -feature is only thereto save space internally in the RAM cache itself. As +feature is only present to save space internally in the RAM cache itself. As such, it is completely transparent to the User-Agent. The RAM cache compression is enabled with the option :ts:cv:`proxy.config.cache.ram_cache.compress`. 
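[Editor's note, not part of this commit's diff: for reference, enabling the CLFUS RAM cache compression discussed above is a single line in :file:`records.config`; the value ``1`` is assumed here to select *fastlz* per the value table in the following hunk.] ::

    CONFIG proxy.config.cache.ram_cache.compress INT 1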
@@ -101,7 +105,6 @@ Value Meaning 3 *liblzma* compression ======= ============================= - .. _changing-the-size-of-the-ram-cache: Changing the Size of the RAM Cache @@ -109,10 +112,10 @@ Changing the Size of the RAM Cache Traffic Server provides a dedicated RAM cache for fast retrieval of popular small objects. The default RAM cache size is automatically -calculated based on the number and size of the cache partitions you have -configured. If you've partitioned your cache according to protocol -and/or hosts, then the size of the RAM cache for each partition is -proportional to the size of that partition. +calculated based on the number and size of the +:term:`cache partitions <cache partition>` you have configured. If you've +partitioned your cache according to protocol and/or hosts, then the size of +the RAM cache for each partition is proportional to the size of that partition. You can increase the RAM cache size for better cache hit performance. However, if you increase the size of the RAM cache and observe a @@ -124,10 +127,12 @@ its previous value. To change the RAM cache size: #. Stop Traffic Server. + #. Set the variable :ts:cv:`proxy.config.cache.ram_cache.size` to specify the size of the RAM cache. The default value of ``-1`` means that the RAM cache is automatically sized at approximately 1MB per gigabyte of disk. + #. Restart Traffic Server. If you increase the RAM cache to a size of 1GB or more, then restart with the :program:`trafficserver` command (refer to :ref:`start-traffic-server`). @@ -146,9 +151,12 @@ To increase the total amount of disk space allocated to the cache on existing disks, or to add new disks to a Traffic Server node: #. Stop Traffic Server. + #. Add hardware, if necessary. + #. Edit :file:`storage.config` to increase the amount of disk space allocated to the cache on existing disks or describe the new hardware you are adding. + #. Restart Traffic Server. 
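[Editor's note, not part of this commit's diff: a hypothetical :file:`storage.config` after following the capacity steps above might look like the sketch below. The directory path, size, and device name are illustrative only, not defaults; directory-backed entries require an explicit size, while a raw device is used in its entirety.] ::

    # existing directory-backed cache span; size is required for directories
    var/trafficserver 256M
    # newly added raw device, used whole
    /dev/sdb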
Reducing Cache Capacity @@ -158,9 +166,12 @@ To reduce the total amount of disk space allocated to the cache on an existing disk, or to remove disks from a Traffic Server node: #. Stop Traffic Server. + #. Remove hardware, if necessary. + #. Edit :file:`storage.config` to reduce the amount of disk space allocated to the cache on existing disks or delete the reference to the hardware you're removing. + #. Restart Traffic Server. .. important:: In :file:`storage.config`, a formatted or raw disk must be at least 128 MB. @@ -171,27 +182,32 @@ Partitioning the Cache ====================== You can manage your cache space more efficiently and restrict disk usage -by creating cache volumes with different sizes for specific protocols. -You can further configure these volumes to store data from specific -origin servers and/or domains. The volume configuration must be the same -on all nodes in a :ref:`cluster <traffic-server-cluster>`. +by creating :term:`cache volumes <cache volume>` with different sizes for +specific protocols. You can further configure these volumes to store data from +specific :term:`origin servers <origin server>` and/or domains. The volume +configuration must be the same on all nodes in a :ref:`cluster <traffic-server-cluster>`. Creating Cache Partitions for Specific Protocols ------------------------------------------------ -You can create separate volumes for your cache that vary in size to -store content according to protocol. This ensures that a certain amount -of disk space is always available for a particular protocol. Traffic -Server currently supports the ``http`` partition type for HTTP objects. - -.. XXX: but not https? +You can create separate :term:`volumes <cache volume>` for your cache that vary +in size to store content according to protocol. This ensures that a certain +amount of disk space is always available for a particular protocol. Traffic +Server currently supports only the ``http`` partition type. 
To partition the cache according to protocol: -#. Enter a line in the :file:`volume.config` file for - each volume you want to create +#. Enter a line in :file:`volume.config` for each volume you want to create. :: + + volume=1 scheme=http size=50% + volume=2 scheme=http size=50% + #. Restart Traffic Server. +.. important:: + + Volume definitions must be the same across all nodes in a cluster. + Making Changes to Partition Sizes and Protocols ----------------------------------------------- @@ -201,13 +217,17 @@ note the following: - You must stop Traffic Server before you change the cache volume size and protocol assignment. + - When you increase the size of a volume, the contents of the volume are *not* deleted. However, when you reduce the size of a volume, the contents of the volume *are* deleted. + - When you change the volume number, the volume is deleted and then recreated, even if the size and protocol type remain the same. + - When you add new disks to your Traffic Server node, volume sizes specified in percentages will increase proportionately. + - Substantial changes to volume sizes can result in disk fragmentation, which affects performance and cache hit rate. You should clear the cache before making many changes to cache volume sizes (refer to `Clearing the Cache`_). @@ -232,11 +252,11 @@ then Traffic Server will run in proxy-only mode. .. note:: - You do not need to stop Traffic Server before you assign - volumes to particular hosts or domains. However, this type of - configuration is time-consuming and can cause a spike in memory usage. - Therefore, it's best to configure partition assignment during periods of - low traffic. + You do not need to stop Traffic Server before you assign volumes + to particular hosts or domains. However, this type of configuration + is time-consuming and can cause a spike in memory usage. + Therefore, it's best to configure partition assignment during + periods of low traffic. 
To partition the cache according to hostname and domain: http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/cache-api.en.rst ---------------------------------------------------------------------- diff --git a/doc/arch/cache/cache-api.en.rst b/doc/arch/cache/cache-api.en.rst index 7a9963e..2cdd615 100644 --- a/doc/arch/cache/cache-api.en.rst +++ b/doc/arch/cache/cache-api.en.rst @@ -5,9 +5,9 @@ to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at - + http://www.apache.org/licenses/LICENSE-2.0 - + Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY @@ -23,20 +23,28 @@ Cache Related API functions .. c:function:: void TSHttpTxnReqCacheableSet(TSHttpTxn txnp, int flag) - Set a *flag* that marks a request as cacheable. This is a positive override only, setting *flag* to 0 restores the default behavior, it does not force the request to be uncacheable. + Set a flag that marks a request as cacheable. This is a positive override + only: setting :c:arg:`flag` to ``0`` restores the default behavior; it does not + force the request to be uncacheable. .. c:function:: TSReturnCode TSCacheUrlSet(TSHttpTxn txnp, char const* url, int length) - Set the cache key for the transaction *txnp* as the string pointed at by *url* of *length* characters. It need not be ``null`` terminated. This should be called from ``TS_HTTP_READ_REQUEST_HDR_HOOK`` which is before cache lookup but late enough that the HTTP request header is available. + Set the cache key for the transaction :c:arg:`txnp` as the string pointed at by + :c:arg:`url` of :c:arg:`length` characters. It need not be NUL-terminated. 
This should + be called from ``TS_HTTP_READ_REQUEST_HDR_HOOK`` which is before cache lookup + but late enough that the HTTP request header is available. =============== Cache Internals =============== -.. cpp:function:: int DIR_SIZE_WITH_BLOCK(int big) +.. cpp:function:: int DIR_SIZE_WITH_BLOCK(int big) - A preprocessor macro which computes the maximum size of a fragment based on the value of *big*. This is computed as if the argument where the value of the *big* field in a struct :cpp:class:`Dir`. + A preprocessor macro which computes the maximum size of a fragment based on + the value of :cpp:arg:`big`. This is computed as if the argument were the value of + the :cpp:arg:`big` field in a struct :cpp:class:`Dir`. -.. cpp:function:: int DIR_BLOCK_SIZE(int big) +.. cpp:function:: int DIR_BLOCK_SIZE(int big) - A preprocessor macro which computes the block size multiplier for a struct :cpp:class:`Dir` where *big* is the *big* field value. + A preprocessor macro which computes the block size multiplier for a struct + :cpp:class:`Dir` where :cpp:arg:`big` is the :cpp:arg:`big` field value. http://git-wip-us.apache.org/repos/asf/trafficserver/blob/aa37d0ab/doc/arch/cache/cache-appendix.en.rst ---------------------------------------------------------------------- diff --git a/doc/arch/cache/cache-appendix.en.rst b/doc/arch/cache/cache-appendix.en.rst index 2b69adc..54eda3a 100644 --- a/doc/arch/cache/cache-appendix.en.rst +++ b/doc/arch/cache/cache-appendix.en.rst @@ -5,9 +5,9 @@ to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - + http://www.apache.org/licenses/LICENSE-2.0 - + Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY @@ -33,67 +33,142 @@ Topics to be done Cache Consistency ~~~~~~~~~~~~~~~~~ -The cache is completely consistent, up to and including kicking the power cord out, if the write buffer on consumer disk drives is disabled. You need to use:: +The cache is completely consistent, up to and including kicking the power cord +out, if the write buffer on consumer disk drives is disabled. You need to use:: hdparm -W0 -The cache validates that all the data for the document is available and will silently mark a partial document as a "miss" on read. There is no "gentle" shutdown for traffic server, you just kill the process, so the "recovery" code (fsck) is run every time traffic server starts up. +The cache validates that all the data for the document is available and will +silently mark a partial document as a miss on read. There is no gentle +shutdown for Traffic Server. You simply kill the process and the recovery code +(fsck) is run every time Traffic Server starts up. -On startup the two versions of the index are checked, and the last valid one is read into memory. Then traffic server moves forward from the last snapped write cursor and reads all the fragments written to disk, and updates the directory (as in a log-based file system). It stops reading at the write before the last valid write header it sees (as a write is not necessarily atomic because of sector reordering). Then the new updated index is written to the invalid version (in case of a crash during startup) and the system starts. +On startup the two versions of the index are checked, and the last valid one is +read into memory. 
|TS| then moves forward from the last snapped write +cursor and reads all the fragments written to disk and updates the directory +(as in a log-based file system). It stops reading at the write before the last +valid write header it sees (as a write is not necessarily atomic because of +sector reordering). Then the new updated index is written to the invalid +version (in case of a crash during startup) and the system starts. .. _volume tagging: Volume Tagging ~~~~~~~~~~~~~~ -Currently cache volumes are allocated somewhat arbitrarily from storage elements. `This enhancement <https://issues.apache.org/jira/browse/TS-1728>`__ allows the :file:`storage.config` file to assign storage units to specific volumes although the volumes must still be listed in :file:`volume.config` in general and in particular to map domains to specific volumes. A primary use case for this is to be able to map specific types of content to different storage elements. This could to have different storage devices for the content (SSD vs. rotational). +Currently, :term:`cache volumes <cache volume>` are allocated somewhat +arbitrarily from storage elements. `This enhancement <https://issues.apache.org/jira/browse/TS-1728>`__ +allows :file:`storage.config` to assign :term:`storage units <storage unit>` to +specific :term:`volumes <cache volume>` although the volumes must still be +listed in :file:`volume.config` in general and in particular to map domains to +specific volumes. A primary use case for this is to be able to map specific +types of content to different storage elements. This can be employed to have +different storage devices for various types of content (SSD vs. rotational). --------------- Version Upgrade --------------- -It is currently the case that any change to the cache format will clear the cache. This is an issue when upgrading the |TS| version and should be kept in mind. +It is currently the case that any change to the cache format will clear the +cache. 
This is an issue when upgrading the |TS| version and should be kept in mind. -.. cache-key: +.. _cache-key: ------------------------- Controlling the cache key ------------------------- -The cache key is by default the URL of the request. There are two possible choices, the original ("pristine") URL and the remapped URL. Which of these is used is determined by the configuration value :ts:cv:`proxy.config.url_remap.pristine_host_hdr`. - -This is an ``INT`` value. If set to ``0`` (disabled) then the remapped URL is used, and if it is not ``0`` (enabled) then the original URL is used. This setting also controls the value of the ``HOST`` header that is placed in the request sent to the origin server, using hostname from the original URL if non-``0`` and the host name from the remapped URL if ``0``. It has no other effects. - -For caching, this setting is irrelevant if no remapping is done or there is a one to one mapping between the original and remapped URLs. - -It becomes significant if multiple original URLs are mapped to the same remapped URL. If pristine headers are enabled requests to different original URLs will be stored as distinct objects in the cache. If disabled the remapped URL will be used and there may be collisions. This is bad if the contents different but quite useful if they are the same (e.g., the original URLs are just aliases for the same underlying server). - -This is also an issue if a remapping is changed because it is effectively a time axis version of the previous case. If an original URL is remapped to a different server address then the setting determines if existing cached objects will be served for new requests (enabled) or not (disabled). Similarly if the original URL mapped to a particular URL is changed then cached objects from the initial original URL will be served from the updated original URL if pristine headers is disabled. - -These collisions are not of themselves good or bad. 
An administrator needs to decide which is appropriate for their situation and set the value correspondingly. - -If a greater degree of control is desired a plugin must be used to invoke the API call :c:func:`TSCacheUrlSet()` to provide a specific cache key. The :c:func:`TSCacheUrlSet()` API can be called as early as ``TS_HTTP_READ_REQUEST_HDR_HOOK``, but no later than ``TS_HTTP_POST_REMAP_HOOK``. It can be called only once per transaction; calling it multiple times has no additional effect. - -A plugin that changes the cache key *must* do so consistently for both cache hit and cache miss requests because two different requests that map to the same cache key will be considered equivalent by the cache. Use of the URL directly provides this and so must any substitute. This is entirely the responsibility of the plugin, there is no way for the |TS| core to detect such an occurrence. - -If :c:func:`TSHttpTxnCacheLookupUrlGet()` is called after new cache url set by :c:func:`TSCacheUrlSet()`, it should use a URL location created by :c:func:`TSUrlCreate()` as its 3rd input parameter instead of getting url_loc from client request. - -It is a requirement that the string be syntactically a URL but otherwise it is completely arbitrary and need not have any path. For instance if the company Network Geographics wanted to store certain content under its own cache key, using a document GUID as part of the key, it could use a cache key like :: +The :term:`cache key` is by default the URL of the request. There are two +possible choices, the original (pristine) URL and the remapped URL. Which of +these is used is determined by the configuration value +:ts:cv:`proxy.config.url_remap.pristine_host_hdr`. + +This is an ``INT`` value. If set to ``0`` (disabled) then the remapped URL is +used, and if it is not ``0`` (enabled) then the original URL is used. 
This +setting also controls the value of the ``HOST`` header that is placed in the +request sent to the :term:`origin server`, using the hostname from the original +URL if not ``0`` and the host name from the remapped URL if ``0``. It has no +other effects. + +For caching, this setting is irrelevant if no remapping is done or there is a +one-to-one mapping between the original and remapped URLs. + +It becomes significant if multiple original URLs are mapped to the same +remapped URL. If pristine headers are enabled, requests to different original +URLs will be stored as distinct :term:`objects <cache object>` in the cache. If +disabled, the remapped URL will be used and there may be collisions. This is +bad if the contents different, but quite useful if they are the same (as in +situations where the original URLs are just aliases for the same underlying +server resource). + +This is also an issue if a remapping is changed because it is effectively a +time axis version of the previous case. If an original URL is remapped to a +different server address then the setting determines if existing cached objects +will be served for new requests (enabled) or not (disabled). Similarly, if the +original URL mapped to a particular URL is changed then cached objects from the +initial original URL will be served from the updated original URL if pristine +headers is disabled. + +These collisions are not by themselves good or bad. An administrator needs to +decide which is appropriate for their situation and set the value correspondingly. + +If a greater degree of control is desired, a plugin must be used to invoke the +API call :c:func:`TSCacheUrlSet()` to provide a specific :term:`cache key`. The +:c:func:`TSCacheUrlSet()` API can be called as early as +``TS_HTTP_READ_REQUEST_HDR_HOOK`` but no later than ``TS_HTTP_POST_REMAP_HOOK``. +It can be called only once per transaction; calling it multiple times has no +additional effect. 
+ +A plugin that changes the cache key must do so consistently for both cache hit +and cache miss requests because two different requests that map to the same +cache key will be considered equivalent by the cache. Use of the URL directly +provides this and so must any substitute. This is entirely the responsibility +of the plugin; there is no way for the |TS| core to detect such an occurrence. + +If :c:func:`TSHttpTxnCacheLookupUrlGet()` is called after new cache url set by +:c:func:`TSCacheUrlSet()`, it should use a URL location created by +:c:func:`TSUrlCreate()` as its third input parameter instead of getting +``url_loc`` from the client request. + +It is a requirement that the string be syntactically a URL but otherwise it is +completely arbitrary and need not have any path. For instance, if the company +Network Geographics wanted to store certain content under its own +:term:`cache key`, using a document GUID as part of the key, it could use a +cache key like :: ngeo://W39WaGTPnvg -The scheme ``ngeo`` was picked because it is *not* a valid URL scheme and so will not collide with any valid URL. +The scheme ``ngeo`` was picked specifically because it is not a valid URL +scheme, and so will never collide with any valid URL. -This can be useful if the URL encodes both important and unimportant data. Instead of storing potentially identical content under different URLs (because they differ on the unimportant parts) a url containing only the important parts could be created and used. +This can be useful if the URL encodes both important and unimportant data. +Instead of storing potentially identical content under different URLs (because +they differ on the unimportant parts) a url containing only the important parts +could be created and used. -For example, suppose the URL for Network Geographics content encoded both the document GUID and a referral key. 
:: +For example, suppose the URL for Network Geographics content encoded both the +document GUID and a referral key. :: http://network-geographics-farm-1.com/doc/W39WaGTPnvg.2511635.UQB_zCc8B8H -We don't want to the same content for every possible referrer. Instead we could use a plugin to convert this to the previous example and requests that differed only in the referrer key would all reference the same cache entry. Note that we would also map :: +We don't want to serve the same content for every possible referrer. Instead, +we could use a plugin to convert this to the previous example and requests that +differed only in the referrer key would all reference the same cache entry. +Note that we would also map the following to the same cache key :: http://network-geographics-farm-56.com/doc/W39WaGTPnvg.2511635.UQB_zCc8B8H -to the same cache key. This can be handy for "sharing" content between servers when that content is identical. Note also the plugin can change the cache key or not depending on any data in the request header, for instance not changing the cache key if the request is not in the ``doc`` directory. If distinguishing servers is important that can easily be pulled from the request URL and used in the synthetic cache key. The implementor is free to extract all relevant elements for use in the cache key. - -While there is explicit no requirement that the synthetic cache key be based on the HTTP request header, in practice it is generally necessary due to the consistency requirement. Because cache lookup happens before attempting to connect to the origin server no data from the HTTP response header is available, leaving only the request header. The most common case is the one described above where the goal is to elide elements of the URL that do not affect the content to minimize cache footprint and improve cache hit rates. +This can be handy for sharing content between servers when that content is +identical. 
Plugins can change the cache key, or not, depending on any data in + the request header. For instance, a plugin might leave the cache key unchanged + if the request is not in the ``doc`` directory. If distinguishing servers is + important, that can easily be pulled from the request URL and used in the + synthetic cache key. The implementor is free to extract all relevant elements + for use in the cache key. + + While there is no explicit requirement that the synthetic cache key be based on + the HTTP request header, in practice it is generally necessary due to the + consistency requirement. Because cache lookup happens before attempting to + connect to the :term:`origin server`, no data from the HTTP response header is + available, leaving only the request header. The most common case is the one + described above, where the goal is to elide elements of the URL that do not + affect the content to minimize cache footprint and improve cache hit rates.
