http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb b/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
index 54cf174..76b1248 100644
--- a/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
+++ b/geode-docs/developing/events/implementing_write_behind_event_handler.html.md.erb
@@ -25,13 +25,13 @@ An `AsyncEventListener` asynchronously processes batches of events after they ha
 
 An `AsyncEventListener` instance is serviced by its own dedicated thread in 
which a callback method is invoked. Events that update a region are placed in 
an internal `AsyncEventQueue`, and one or more threads dispatch batches of 
events at a time to the listener implementation.
 
-You can configure an `AsyncEventQueue` to be either serial or parallel. A 
serial queue is deployed to one Geode member, and it delivers all of a region's 
events, in order of occurrence, to a configured `AsyncEventListener` 
implementation. A parallel queue is deployed to multiple Geode members, and 
each instance of the queue delivers region events, possibly simultaneously, to 
a local `AsyncEventListener` implementation.
+You can configure an `AsyncEventQueue` to be either serial or parallel. A 
serial queue is deployed to one <%=vars.product_name%> member, and it delivers 
all of a region's events, in order of occurrence, to a configured 
`AsyncEventListener` implementation. A parallel queue is deployed to multiple 
<%=vars.product_name%> members, and each instance of the queue delivers region 
events, possibly simultaneously, to a local `AsyncEventListener` implementation.
 
-While a parallel queue provides the best throughput for writing events, it 
provides less control for ordering those events. With a parallel queue, you 
cannot preserve event ordering for a region as a whole because multiple Geode 
servers queue and deliver the region's events at the same time. However, the 
ordering of events for a given partition (or for a given queue of a distributed 
region) can be preserved.
+While a parallel queue provides the best throughput for writing events, it 
provides less control for ordering those events. With a parallel queue, you 
cannot preserve event ordering for a region as a whole because multiple 
<%=vars.product_name%> servers queue and deliver the region's events at the 
same time. However, the ordering of events for a given partition (or for a 
given queue of a distributed region) can be preserved.
 
 For both serial and parallel queues, you can control the maximum amount of 
memory that each queue uses, as well as the batch size and frequency for 
processing batches in the queue. You can also configure queues to persist to 
disk (instead of simply overflowing to disk) so that write-behind caching can 
pick up where it left off when a member shuts down and is later restarted.
 
-Optionally, a queue can use multiple threads to dispatch queued events. When 
you configure multiple threads for a serial queue, the logical queue that is 
hosted on a Geode member is divided into multiple physical queues, each with a 
dedicated dispatcher thread. You can then configure whether the threads 
dispatch queued events by key, by thread, or in the same order in which events 
were added to the queue. When you configure multiple threads for a parallel 
queue, each queue hosted on a Geode member is processed by dispatcher threads; 
the total number of queues created depends on the number of members that host 
the region.
+Optionally, a queue can use multiple threads to dispatch queued events. When 
you configure multiple threads for a serial queue, the logical queue that is 
hosted on a <%=vars.product_name%> member is divided into multiple physical 
queues, each with a dedicated dispatcher thread. You can then configure whether 
the threads dispatch queued events by key, by thread, or in the same order in 
which events were added to the queue. When you configure multiple threads for a 
parallel queue, each queue hosted on a <%=vars.product_name%> member is 
processed by dispatcher threads; the total number of queues created depends on 
the number of members that host the region.
 
 A `GatewayEventFilter` can be placed on the `AsyncEventQueue` to control 
whether a particular event is sent to a selected `AsyncEventListener`. For 
example, events associated with sensitive data could be detected and not 
queued. For more detail, see the Javadocs for `GatewayEventFilter`.
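
The queue options this section describes (serial vs. parallel, dispatcher threads, ordering policy, persistence, and an event filter) can be combined declaratively. The following cache.xml fragment is a sketch only, not part of this patch: the `com.example.*` class names are hypothetical, and the element and attribute names reflect the Geode cache XML schema as best understood here.

```xml
<!-- Sketch: a parallel, persistent queue with four dispatcher threads,
     key-based ordering, a filter for sensitive events, and a listener.
     com.example.* class names are hypothetical. -->
<async-event-queue id="sampleQueue"
                   parallel="true"
                   persistent="true"
                   disk-store-name="sampleDiskStore"
                   dispatcher-threads="4"
                   order-policy="key">
  <gateway-event-filter>
    <class-name>com.example.SensitiveDataFilter</class-name>
  </gateway-event-filter>
  <async-event-listener>
    <class-name>com.example.SampleAsyncEventListener</class-name>
  </async-event-listener>
</async-event-queue>
```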
 
@@ -61,11 +61,11 @@ Review the following guidelines before using an AsyncEventListener:
 
 -   If you use an `AsyncEventListener` to implement a write-behind cache 
listener, your code should check for the possibility that an existing database 
connection may have been closed due to an earlier exception. For example, check 
for `Connection.isClosed()` in a catch block and re-create the connection as 
needed before performing further operations.
 -   Use a serial `AsyncEventQueue` if you need to preserve the order of region 
events within a thread when delivering events to your listener implementation. 
Use parallel queues when the order of events within a thread is not important, 
and when you require maximum throughput for processing events. In both cases, 
serial and parallel, the order of operations on a given key is preserved within 
the scope of the thread.
--   You must install the `AsyncEventListener` implementation on a Geode member 
that hosts the region whose events you want to process.
--   If you configure a parallel `AsyncEventQueue`, deploy the queue on each 
Geode member that hosts the region.
+-   You must install the `AsyncEventListener` implementation on a 
<%=vars.product_name%> member that hosts the region whose events you want to 
process.
+-   If you configure a parallel `AsyncEventQueue`, deploy the queue on each 
<%=vars.product_name%> member that hosts the region.
 -   You can install a listener on more than one member to provide high 
availability and guarantee delivery for events, in the event that a member with 
the active `AsyncEventListener` shuts down. At any given time only one member 
has an active listener for dispatching events. The listeners on other members 
remain on standby for redundancy. For best performance and most efficient use 
of memory, install only one standby listener (redundancy of at most one).
 -   Install no more than one standby listener (redundancy of at most one) for 
performance and memory reasons.
--   To preserve pending events through member shutdowns, configure Geode to 
persist the internal queue of the `AsyncEventListener` to an available disk 
store. By default, any pending events that reside in the internal queue of an 
`AsyncEventListener` are lost if the active listener's member shuts down.
+-   To preserve pending events through member shutdowns, configure 
<%=vars.product_name%> to persist the internal queue of the 
`AsyncEventListener` to an available disk store. By default, any pending events 
that reside in the internal queue of an `AsyncEventListener` are lost if the 
active listener's member shuts down.
 -   To ensure high availability and reliable delivery of events, configure the 
event queue to be both persistent and redundant.
 
 ## <a 
id="implementing_write_behind_cache_event_handling__section_FB3EB382E37945D9895E09B47A64D6B9"
 class="no-quick-link"></a>Implementing an AsyncEventListener
@@ -94,7 +94,7 @@ class MyAsyncEventListener implements AsyncEventListener {
 
 ## <a 
id="implementing_write_behind_cache_event_handling__section_AB80262CFB6D4867B52A5D6D880A5294"
 class="no-quick-link"></a>Processing AsyncEvents
 
-Use the 
[AsyncEventListener.processEvents](/releases/latest/javadoc/org/apache/geode/cache/asyncqueue/AsyncEventListener.html)
 method to process AsyncEvents. This method is called asynchronously when 
events are queued to be processed. The size of the list reflects the number of 
batch events where batch size is defined in the AsyncEventQueueFactory. The 
`processEvents` method returns a boolean; true if the AsyncEvents are processed 
correctly, and false if any events fail processing. As long as `processEvents` 
returns false, Geode continues to re-try processing the events.
+Use the 
[AsyncEventListener.processEvents](/releases/latest/javadoc/org/apache/geode/cache/asyncqueue/AsyncEventListener.html)
 method to process AsyncEvents. This method is called asynchronously when 
events are queued to be processed. The size of the list reflects the number of 
batch events where batch size is defined in the AsyncEventQueueFactory. The 
`processEvents` method returns a boolean; true if the AsyncEvents are processed 
correctly, and false if any events fail processing. As long as `processEvents` 
returns false, <%=vars.product_name%> continues to re-try processing the events.
 
 You can use the `getDeserializedValue` method to obtain cache values for 
entries that have been updated or created. Since the `getDeserializedValue` 
method will return a null value for destroyed entries, you should use the 
`getKey` method to obtain references to cache objects that have been destroyed. 
Here's an example of processing AsyncEvents:
 
@@ -188,11 +188,11 @@ To configure a write-behind cache listener, you first configure an asynchronous
     AsyncEventQueue asyncQueue = factory.create("sampleQueue", listener);
     ```
 
-2.  If you are using a parallel `AsyncEventQueue`, the gfsh example above 
requires no alteration, as gfsh applies to all members. If using cache.xml or 
the Java API to configure your `AsyncEventQueue`, repeat the above 
configuration in each Geode member that will host the region. Use the same ID 
and configuration settings for each queue configuration.
+2.  If you are using a parallel `AsyncEventQueue`, the gfsh example above 
requires no alteration, as gfsh applies to all members. If using cache.xml or 
the Java API to configure your `AsyncEventQueue`, repeat the above 
configuration in each <%=vars.product_name%> member that will host the region. 
Use the same ID and configuration settings for each queue configuration.
     **Note:**
     You can ensure other members use the sample configuration by using the 
cluster configuration service available in gfsh. See [Overview of the Cluster 
Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
 
-3.  On each Geode member that hosts the `AsyncEventQueue`, assign the queue to 
each region that you want to use with the `AsyncEventListener` implementation.
+3.  On each <%=vars.product_name%> member that hosts the `AsyncEventQueue`, 
assign the queue to each region that you want to use with the 
`AsyncEventListener` implementation.
 
     **gfsh Configuration**
 
@@ -234,7 +234,7 @@ To configure a write-behind cache listener, you first configure an asynchronous
     mutator.addAsyncEventQueueId("sampleQueue");        
     ```
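
The `AttributesMutator` call above can also be expressed declaratively; a hedged cache.xml sketch (the region name is hypothetical, and the queue id matches the one created earlier):

```xml
<!-- Sketch: bind the queue to a region by id at region creation time. -->
<region name="sampleRegion">
  <region-attributes async-event-queue-ids="sampleQueue"/>
</region>
```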
 
-    See the [Geode API 
documentation](/releases/latest/javadoc/org/apache/geode/cache/AttributesMutator.html)
 for more information.
+    See the [<%=vars.product_name%> API 
documentation](/releases/latest/javadoc/org/apache/geode/cache/AttributesMutator.html)
 for more information.
 
 4.  Optionally configure persistence and conflation for the queue.
     **Note:**

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
index cda18ee..a1e763a 100644
--- a/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
+++ b/geode-docs/developing/events/list_of_event_handlers_and_events.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Geode provides many types of events and event handlers to help you manage your 
different data and application needs.
+<%=vars.product_name%> provides many types of events and event handlers to 
help you manage your different data and application needs.
 
 ## <a id="event_handlers_and_events__section_E7B7502F673B43E794884D0F6BF537CF" 
class="no-quick-link"></a>Event Handlers
 
@@ -79,7 +79,7 @@ Use either cache handlers or membership handlers in any single application. Do n
 <td><code class="ph codeph">MembershipListener</code>
 <p>(org.apache.geode.management .membership.MembershipListener)</p></td>
 <td><code class="ph codeph">MembershipEvent</code></td>
-<td>Use this interface to receive membership events only about peers. This 
listener's callback methods are invoked when peer members join or leave the 
Geode distributed system. Callback methods include <code class="ph 
codeph">memberCrashed</code>, <code class="ph codeph">memberJoined</code>, and 
<code class="ph codeph">memberLeft</code> (graceful exit).</td>
+<td>Use this interface to receive membership events only about peers. This 
listener's callback methods are invoked when peer members join or leave the 
<%=vars.product_name%> distributed system. Callback methods include <code 
class="ph codeph">memberCrashed</code>, <code class="ph 
codeph">memberJoined</code>, and <code class="ph codeph">memberLeft</code> 
(graceful exit).</td>
 </tr>
 <tr>
 <td><code class="ph codeph">RegionMembershipListener</code></td>
@@ -151,7 +151,7 @@ The events in this table are cache events unless otherwise noted.
 <tr>
 <td><code class="ph codeph">EntryEvent</code></td>
 <td><code class="ph codeph">CacheListener</code>, <code class="ph 
codeph">CacheWriter</code>, <code class="ph codeph">TransactionListener</code> 
(inside the <code class="ph codeph">TransactionEvent</code>)</td>
-<td>Extends <code class="ph codeph">CacheEvent</code> for entry events. 
Contains information about an event affecting a data entry in the cache. The 
information includes the key, the value before this event, and the value after 
this event. <code class="ph codeph">EntryEvent.getNewValue</code> returns the 
current value of the data entry. <code class="ph 
codeph">EntryEvent.getOldValue</code> returns the value before this event if it 
is available. For a partitioned region, returns the old value if the local 
cache holds the primary copy of the entry. <code class="ph 
codeph">EntryEvent</code> provides the Geode transaction ID if available.
+<td>Extends <code class="ph codeph">CacheEvent</code> for entry events. 
Contains information about an event affecting a data entry in the cache. The 
information includes the key, the value before this event, and the value after 
this event. <code class="ph codeph">EntryEvent.getNewValue</code> returns the 
current value of the data entry. <code class="ph 
codeph">EntryEvent.getOldValue</code> returns the value before this event if it 
is available. For a partitioned region, returns the old value if the local 
cache holds the primary copy of the entry. <code class="ph 
codeph">EntryEvent</code> provides the <%=vars.product_name%> transaction ID if 
available.
 <p>You can retrieve serialized values from <code class="ph 
codeph">EntryEvent</code> using the <code class="ph 
codeph">getSerialized</code>* methods. This is useful if you get values from 
one region’s events just to put them into a separate cache region. There is 
no counterpart <code class="ph codeph">put</code> function as the put 
recognizes that the value is serialized and bypasses the serialization 
step.</p></td>
 </tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
index db43ed5..06e14b1 100644
--- a/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
+++ b/geode-docs/developing/events/tune_client_server_event_messaging.html.md.erb
@@ -28,10 +28,10 @@ A single client thread receives and processes messages from the server, tracking
 
 The client’s message tracking list holds the highest sequence ID of any 
message received for each originating thread. The list can become quite large 
in systems where there are many different threads coming and going and doing 
work on the cache. After a thread dies, its tracking entry is not needed. To 
avoid maintaining tracking information for threads that have died, the client 
expires entries that have had no activity for more than the 
`subscription-message-tracking-timeout`.
 
--   **[Conflate the Server Subscription 
Queue](../../developing/events/conflate_server_subscription_queue.html)**
+-   **[Conflate the Server Subscription 
Queue](conflate_server_subscription_queue.html)**
 
--   **[Limit the Server's Subscription Queue Memory 
Use](../../developing/events/limit_server_subscription_queue_size.html)**
+-   **[Limit the Server's Subscription Queue Memory 
Use](limit_server_subscription_queue_size.html)**
 
--   **[Tune the Client's Subscription Message Tracking 
Timeout](../../developing/events/tune_client_message_tracking_timeout.html)**
+-   **[Tune the Client's Subscription Message Tracking 
Timeout](tune_client_message_tracking_timeout.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
index 7b201bc..56a3b12 100644
--- a/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
+++ b/geode-docs/developing/events/writing_callbacks_that_modify_the_cache.html.md.erb
@@ -23,20 +23,20 @@ Event handlers are synchronous. If you need to change the cache or perform any o
 
 ## <a 
id="writing_callbacks_that_modify_the_cache__section_98E49363C91945DEB0A3B2FD9A209969"
 class="no-quick-link"></a>Operations to Avoid in Event Handlers
 
-Do not perform distributed operations of any kind directly from your event 
handler. Geode is a highly distributed system and many operations that may seem 
local invoke distributed operations.
+Do not perform distributed operations of any kind directly from your event 
handler. <%=vars.product_name%> is a highly distributed system and many 
operations that may seem local invoke distributed operations.
 
 These are common distributed operations that can get you into trouble:
 
 -   Calling `Region` methods, on the event's region or any other region.
--   Using the Geode `DistributedLockService`.
+-   Using the <%=vars.product_name%> `DistributedLockService`.
 -   Modifying region attributes.
--   Executing a function through the Geode `FunctionService`.
+-   Executing a function through the <%=vars.product_name%> `FunctionService`.
 
-To be on the safe side, do not make any calls to the Geode API directly from 
your event handler. Make all Geode API calls from within a separate thread or 
executor.
+To be on the safe side, do not make any calls to the <%=vars.product_name%> 
API directly from your event handler. Make all <%=vars.product_name%> API calls 
from within a separate thread or executor.
 
 ## <a 
id="writing_callbacks_that_modify_the_cache__section_78648D4177E14EA695F0B059E336137C"
 class="no-quick-link"></a>How to Perform Distributed Operations Based on Events
 
-If you need to use the Geode API from your handlers, make your work 
asynchronous to the event handler. You can spawn a separate thread or use a 
solution like the `java.util.concurrent.Executor` interface.
+If you need to use the <%=vars.product_name%> API from your handlers, make 
your work asynchronous to the event handler. You can spawn a separate thread or 
use a solution like the `java.util.concurrent.Executor` interface.
 
 This example shows a serial executor where the callback creates a `Runnable` 
that can be pulled off a queue and run by another object. This preserves the 
ordering of events.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/chapter_overview.html.md.erb b/geode-docs/developing/eviction/chapter_overview.html.md.erb
index c5d1417..1cd9814 100644
--- a/geode-docs/developing/eviction/chapter_overview.html.md.erb
+++ b/geode-docs/developing/eviction/chapter_overview.html.md.erb
@@ -23,11 +23,11 @@ Use eviction to control data region size.
 
 <a id="eviction__section_C3409270DD794822B15E819E2276B21A"></a>
 
--   **[How Eviction Works](../../developing/eviction/how_eviction_works.html)**
+-   **[How Eviction Works](how_eviction_works.html)**
 
-    Eviction settings cause Apache Geode to work to keep a region's resource 
use under a specified level by removing least recently used (LRU) entries to 
make way for new entries.
+    Eviction settings cause <%=vars.product_name_long%> to work to keep a 
region's resource use under a specified level by removing least recently used 
(LRU) entries to make way for new entries.
 
--   **[Configure Data 
Eviction](../../developing/eviction/configuring_data_eviction.html)**
+-   **[Configure Data Eviction](configuring_data_eviction.html)**
 
     Use eviction controllers to configure the eviction-attributes region 
attribute settings to keep your region within a specified limit.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
index 6c22284..530c22f 100644
--- a/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
+++ b/geode-docs/developing/eviction/configuring_data_eviction.html.md.erb
@@ -22,21 +22,21 @@ limitations under the License.
 Use eviction controllers to configure the eviction-attributes region attribute 
settings to keep your region within a specified limit.
 
 <a 
id="configuring_data_eviction__section_8515EC9635C342C0916EE9E6120E2AC9"></a>
-Eviction controllers monitor region and memory use and, when the limit is 
reached, remove older entries to make way for new data. For heap percentage, 
the controller used is the Geode resource manager, configured in conjunction 
with the JVM's garbage collector for optimum performance.
+Eviction controllers monitor region and memory use and, when the limit is 
reached, remove older entries to make way for new data. For heap percentage, 
the controller used is the <%=vars.product_name%> resource manager, configured 
in conjunction with the JVM's garbage collector for optimum performance.
 
 Configure data eviction as follows. You do not need to perform these steps in 
the sequence shown.
 
 1.  Decide whether to evict based on:
     -   Entry count (useful if your entry sizes are relatively uniform).
     -   Total bytes used. In partitioned regions, this is set using 
`local-max-memory`. In non-partitioned, it is set in `eviction-attributes`.
-    -   Percentage of application heap used. This uses the Geode resource 
manager. When the manager determines that eviction is required, the manager 
orders the eviction controller to start evicting from all regions where the 
eviction algorithm is set to `lru-heap-percentage`. Eviction continues until 
the manager calls a halt. Geode evicts the least recently used entry hosted by 
the member for the region. See [Managing Heap and Off-heap 
Memory](../../managing/heap_use/heap_management.html#resource_manager).
+    -   Percentage of application heap used. This uses the 
<%=vars.product_name%> resource manager. When the manager determines that 
eviction is required, the manager orders the eviction controller to start 
evicting from all regions where the eviction algorithm is set to 
`lru-heap-percentage`. Eviction continues until the manager calls a halt. 
<%=vars.product_name%> evicts the least recently used entry hosted by the 
member for the region. See [Managing Heap and Off-heap 
Memory](../../managing/heap_use/heap_management.html#resource_manager).
 
 2.  Decide what action to take when the limit is reached:
     -   Locally destroy the entry.
     -   Overflow the entry data to disk. See [Persistence and 
Overflow](../storing_data_on_disk/chapter_overview.html).
 
 3.  Decide the maximum amount of data to allow in the member for the eviction 
measurement indicated. This is the maximum for all storage for the region in 
the member. For partitioned regions, this is the total for all buckets stored 
in the member for the region - including any secondary buckets used for 
redundancy.
-4.  Decide whether to program a custom sizer for your region. If you are able 
to provide such a class, it might be faster than the standard sizing done by 
Geode. Your custom class must follow the guidelines for defining custom classes 
and, additionally, must implement `org.apache.geode.cache.util.ObjectSizer`. 
See [Requirements for Using Custom Classes in Data 
Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).
+4.  Decide whether to program a custom sizer for your region. If you are able 
to provide such a class, it might be faster than the standard sizing done by 
<%=vars.product_name%>. Your custom class must follow the guidelines for 
defining custom classes and, additionally, must implement 
`org.apache.geode.cache.util.ObjectSizer`. See [Requirements for Using Custom 
Classes in Data 
Caching](../../basic_config/data_entries_custom_classes/using_custom_classes.html).
 
 **Note:**
 You can also configure Regions using the gfsh command-line interface, however, 
you cannot configure `eviction-attributes` using gfsh. See [Region 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_EF03119A40EE492984F3B6248596E1DD)
 and [Disk Store 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/eviction/how_eviction_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/eviction/how_eviction_works.html.md.erb b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
index a714253..0c11f0b 100644
--- a/geode-docs/developing/eviction/how_eviction_works.html.md.erb
+++ b/geode-docs/developing/eviction/how_eviction_works.html.md.erb
@@ -19,16 +19,16 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 
-Eviction settings cause Apache Geode to work to keep a region's resource use 
under a specified level by removing least recently used (LRU) entries to make 
way for new entries.
+Eviction settings cause <%=vars.product_name_long%> to work to keep a region's 
resource use under a specified level by removing least recently used (LRU) 
entries to make way for new entries.
 
 <a id="how_eviction_works__section_C3409270DD794822B15E819E2276B21A"></a>
 You configure for eviction based on entry count, percentage of available heap, 
and absolute memory usage. You also configure what to do when you need to 
evict: destroy entries or overflow them to disk. See [Persistence and 
Overflow](../storing_data_on_disk/chapter_overview.html).
 
-When Geode determines that adding or updating an entry would take the region 
over the specified level, it overflows or removes enough older entries to make 
room. For entry count eviction, this means a one-to-one trade of an older entry 
for the newer one. For the memory settings, the number of older entries that 
need to be removed to make space depends entirely on the relative sizes of the 
older and newer entries.
+When <%=vars.product_name%> determines that adding or updating an entry would 
take the region over the specified level, it overflows or removes enough older 
entries to make room. For entry count eviction, this means a one-to-one trade 
of an older entry for the newer one. For the memory settings, the number of 
older entries that need to be removed to make space depends entirely on the 
relative sizes of the older and newer entries.
 
 ## <a id="how_eviction_works__section_69E2AA453EDE4E088D1C3332C071AFE1" 
class="no-quick-link"></a>Eviction in Partitioned Regions
 
-In partitioned regions, Geode removes the oldest entry it can find *in the 
bucket where the new entry operation is being performed*. Geode maintains LRU 
entry information on a bucket-by-bucket basis, as the cost of maintaining 
information across the partitioned region would be too great a performance hit.
+In partitioned regions, <%=vars.product_name%> removes the oldest entry it can 
find *in the bucket where the new entry operation is being performed*. 
<%=vars.product_name%> maintains LRU entry information on a bucket-by-bucket 
basis, as the cost of maintaining information across the partitioned region 
would be too great a performance hit.
 
 -   For memory and entry count eviction, LRU eviction is done in the bucket 
where the new entry operation is being performed until the overall size of the 
combined buckets in the member has dropped enough to perform the operation 
without going over the limit.
 -   For heap eviction, each partitioned region bucket is treated as if it were 
a separate region, with each eviction action only considering the LRU for the 
bucket, and not the partitioned region as a whole.

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/expiration/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/chapter_overview.html.md.erb b/geode-docs/developing/expiration/chapter_overview.html.md.erb
index 546af32..3764b6f 100644
--- a/geode-docs/developing/expiration/chapter_overview.html.md.erb
+++ b/geode-docs/developing/expiration/chapter_overview.html.md.erb
@@ -21,11 +21,11 @@ limitations under the License.
 
 Use expiration to keep data current by removing stale entries. You can also 
use it to remove entries you are not using so your region uses less space. 
Expired entries are reloaded the next time they are requested.
 
--   **[How Expiration 
Works](../../developing/expiration/how_expiration_works.html)**
+-   **[How Expiration Works](how_expiration_works.html)**
 
     Expiration removes old entries and entries that you are not using. You can 
destroy or invalidate entries.
 
--   **[Configure Data 
Expiration](../../developing/expiration/configuring_data_expiration.html)**
+-   **[Configure Data Expiration](configuring_data_expiration.html)**
 
     Configure the type of expiration and the expiration action to use.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/expiration/how_expiration_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/expiration/how_expiration_works.html.md.erb b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
index 4ec5015..e005581 100644
--- a/geode-docs/developing/expiration/how_expiration_works.html.md.erb
+++ b/geode-docs/developing/expiration/how_expiration_works.html.md.erb
@@ -30,14 +30,14 @@ This figure shows two basic expiration settings for a 
producer/consumer system.
 
 ## <a id="how_expiration_works__section_B6C55A610F4243ED8F1986E8A98858CF" 
class="no-quick-link"></a>Expiration Types
 
-Apache Geode uses the following expiration types:
+<%=vars.product_name_long%> uses the following expiration types:
 
 -   **Time to live (TTL)**. The amount of time, in seconds, the object may 
remain in the cache after the last creation or update. For entries, the counter 
is set to zero for create and put operations. Region counters are reset when 
the region is created and when an entry has its counter reset. The TTL 
expiration attributes are `region-time-to-live` and `entry-time-to-live`.
 -   **Idle timeout**. The amount of time, in seconds, the object may remain in 
the cache after the last access. The idle timeout counter for an object is 
reset any time its TTL counter is reset. In addition, an entry’s idle timeout 
counter is reset any time the entry is accessed through a get operation or a 
netSearch. The idle timeout counter for a region is reset whenever the idle 
timeout is reset for one of its entries. Idle timeout expiration attributes 
are: `region-idle-time` and `entry-idle-time`.
 
 ## <a id="how_expiration_works__section_BA995343EF584104B9853CFE4CAD88AD" 
class="no-quick-link"></a>Expiration Actions
 
-Apache Geode uses the following expiration actions:
+<%=vars.product_name_long%> uses the following expiration actions:
 
 -   destroy
 -   local destroy
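
The two expiration types above differ only in which operations reset their counters: TTL resets on create and update, while idle timeout also resets on reads. A minimal plain-Java sketch of that bookkeeping (hypothetical class, not a Geode API):

```java
// Illustration of the counter-reset rules described above: TTL is driven
// by the last modification, idle timeout by the last access of any kind.
public class ExpirationClock {
    private long lastModified;   // drives time-to-live
    private long lastAccessed;   // drives idle timeout
    private final long ttlMillis;
    private final long idleMillis;

    public ExpirationClock(long ttlMillis, long idleMillis, long now) {
        this.ttlMillis = ttlMillis;
        this.idleMillis = idleMillis;
        this.lastModified = now;   // counter set to zero on create
        this.lastAccessed = now;
    }

    public void onUpdate(long now) {   // create/put: resets both counters
        lastModified = now;
        lastAccessed = now;
    }

    public void onRead(long now) {     // get or netSearch: resets idle only
        lastAccessed = now;
    }

    public boolean isExpired(long now) {
        return (ttlMillis > 0 && now - lastModified >= ttlMillis)
            || (idleMillis > 0 && now - lastAccessed >= idleMillis);
    }
}
```

A value of 0 for either attribute disables that expiration type, mirroring the default behavior of the corresponding region attributes.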

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/chapter_overview.html.md.erb 
b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
index c85e9c8..46d39f8 100644
--- a/geode-docs/developing/function_exec/chapter_overview.html.md.erb
+++ b/geode-docs/developing/function_exec/chapter_overview.html.md.erb
@@ -31,6 +31,6 @@ A function is a body of code that resides on a server and 
that an application ca
 
 -   **[How Function Execution Works](how_function_execution_works.html)**
 
--   **[Executing a Function in Apache Geode](function_execution.html)**
+-   **[Executing a Function in 
<%=vars.product_name_long%>](function_execution.html)**
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/function_execution.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/function_exec/function_execution.html.md.erb 
b/geode-docs/developing/function_exec/function_execution.html.md.erb
index 221098b..a7ce138 100644
--- a/geode-docs/developing/function_exec/function_execution.html.md.erb
+++ b/geode-docs/developing/function_exec/function_execution.html.md.erb
@@ -1,6 +1,4 @@
----
-title:  Executing a Function in Apache Geode
----
+<% set_title("Executing a Function in", product_name_long) %>
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
@@ -37,7 +35,7 @@ Code the methods you need for the function. These steps do 
not have to be done i
 
 1.  Code `getId` to return a unique name for your function. You can use this 
name to access the function through the `FunctionService` API.
 2.  For high availability:
-    1.  Code `isHa` to return true to indicate to Geode that it can re-execute 
your function after one or more members fails
+    1.  Code `isHa` to return true to indicate to <%=vars.product_name%> that 
it can re-execute your function after one or more members fail
     2.  Code your function to return a result
     3.  Code `hasResult` to return true
 
@@ -57,7 +55,7 @@ Code the methods you need for the function. These steps do 
not have to be done i
             **Note:**
             When you use `PartitionRegionHelper.getLocalDataForContext`, 
`putIfAbsent` may not return expected results if you are working on a local data 
set instead of the region.
 
-    4.  To propagate an error condition or exception back to the caller of the 
function, throw a FunctionException from the `execute` method. Geode transmits 
the exception back to the caller as if it had been thrown on the calling side. 
See the Java API documentation for 
[FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html)
 for more information.
+    4.  To propagate an error condition or exception back to the caller of the 
function, throw a FunctionException from the `execute` method. 
<%=vars.product_name%> transmits the exception back to the caller as if it had 
been thrown on the calling side. See the Java API documentation for 
[FunctionException](/releases/latest/javadoc/org/apache/geode/cache/execute/FunctionException.html)
 for more information.
 
 Example function code:
 
@@ -114,7 +112,7 @@ When you deploy a JAR file that contains a Function (in 
other words, contains a
 To register a function by using `gfsh`:
 
 1.  Package your class files into a JAR file.
-2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the 
Geode distributed system where you want to run the function.
+2.  Start a `gfsh` prompt. If necessary, start a Locator and connect to the 
<%=vars.product_name%> distributed system where you want to run the function.
 3.  At the gfsh prompt, type the following command:
 
     ``` pre
@@ -125,7 +123,7 @@ To register a function by using `gfsh`:
 
 If another JAR file is deployed (either with the same JAR filename or another 
filename) with the same Function, the new implementation of the Function will 
be registered, overwriting the old one. If a JAR file is undeployed, any 
Functions that were auto-registered at the time of deployment will be 
unregistered. Since deploying a JAR file that has the same name multiple times 
results in the JAR being un-deployed and re-deployed, Functions in the JAR will 
be unregistered and re-registered each time this occurs. If a Function with the 
same ID is registered from multiple differently named JAR files, the Function 
will be unregistered if either of those JAR files is re-deployed or un-deployed.
 
-See [Deploying Application JARs to Apache Geode 
Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C)
 for more details on deploying JAR files.
+See [Deploying Application JARs to <%=vars.product_name_long%> 
Members](../../configuring/cluster_config/deploying_application_jars.html#concept_4436C021FB934EC4A330D27BD026602C)
 for more details on deploying JAR files.
 
 ## <a id="function_execution__section_1D1056F843044F368FB76F47061FCD50" 
class="no-quick-link"></a>Register the Function Programmatically
 
@@ -169,7 +167,7 @@ In every member where you want to explicitly execute the 
function and process th
 **Running the Function Using gfsh**
 
 1.  Start a gfsh prompt.
-2.  If necessary, start a Locator and connect to the Geode distributed system 
where you want to run the function.
+2.  If necessary, start a Locator and connect to the <%=vars.product_name%> 
distributed system where you want to run the function.
 3.  At the gfsh prompt, type the following command:
 
     ``` pre
@@ -228,12 +226,12 @@ ResultCollector rc = execution.execute(function);
 List result = (List)rc.getResult();
 ```
 
-Geode’s default `ResultCollector` collects all results into an `ArrayList`. 
Its `getResult` methods block until all results are received. Then they return 
the full result set.
+<%=vars.product_name%>’s default `ResultCollector` collects all results into 
an `ArrayList`. Its `getResult` methods block until all results are received. 
Then they return the full result set.
 
 To customize results collecting:
 
 1.  Write a class that extends `ResultCollector` and code the methods to store 
and retrieve the results as you need. Note that the methods are of two types:
-    1.  `addResult` and `endResults` are called by Geode when results arrive 
from the `Function` instance `SendResults` methods
+    1.  `addResult` and `endResults` are called by <%=vars.product_name%> when 
results arrive from the `Function` instance `SendResults` methods
     2.  `getResult` is available to your executing application (the one that 
calls `Execution.execute`) to retrieve the results
 
 2.  Use high availability for `onRegion` functions that have been coded for it:
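
The default collector's contract described above — gather results into a list, with `getResult` blocking until all results have arrived — can be mimicked in plain Java. This is a sketch of the behavior, not Geode's actual `ResultCollector` interface:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: addResult/endResults are invoked as results arrive from the
// function-executing members; getResult blocks the calling application
// until endResults signals that the result set is complete.
public class BlockingListCollector<T> {
    private final List<T> results = new ArrayList<>();
    private boolean done = false;

    public synchronized void addResult(T result) {
        results.add(result);
    }

    public synchronized void endResults() {
        done = true;
        notifyAll();   // wake any caller blocked in getResult
    }

    public synchronized List<T> getResult() {
        while (!done) {
            try {
                wait();   // block until all results are in
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted waiting for results", e);
            }
        }
        return new ArrayList<>(results);
    }
}
```

A custom collector would replace the `ArrayList` with whatever storage and aggregation the application needs, keeping the same block-until-done semantics for `getResult`.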

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb 
b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
index a72045f..ae80b01 100644
--- 
a/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
+++ 
b/geode-docs/developing/function_exec/how_function_execution_works.html.md.erb
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## <a 
id="how_function_execution_works__section_881D2FF6761B4D689DDB46C650E2A2E1" 
class="no-quick-link"></a>Where Functions Are Executed
 
-You can execute data-independent functions or data-dependent functions in 
Geode in the following places:
+You can execute data-independent functions or data-dependent functions in 
<%=vars.product_name%> in the following places:
 
 **For Data-independent Functions**
 
@@ -39,13 +39,13 @@ See the `org.apache.geode.cache.execute.FunctionService` 
Java API documentation
 
 The following things occur when executing a function:
 
-1.  When you call the `execute` method on the `Execution` object, Geode 
invokes the function on all members where it needs to run. The locations are 
determined by the `FunctionService` `on*` method calls, region configuration, 
and any filters.
+1.  When you call the `execute` method on the `Execution` object, 
<%=vars.product_name%> invokes the function on all members where it needs to 
run. The locations are determined by the `FunctionService` `on*` method calls, 
region configuration, and any filters.
 2.  If the function has results, they are returned to the `addResult` method 
call in a `ResultCollector` object.
 3.  The originating member collects results using `ResultCollector.getResult`.
 
 ## <a 
id="how_function_execution_works__section_14FF9932C7134C5584A14246BB4D4FF6" 
class="no-quick-link"></a>Highly Available Functions
 
-Generally, function execution errors are returned to the calling application. 
You can code for high availability for `onRegion` functions that return a 
result, so Geode automatically retries a function if it does not execute 
successfully. You must code and configure the function to be highly available, 
and the calling application must invoke the function using the results 
collector `getResult` method.
+Generally, function execution errors are returned to the calling application. 
You can code for high availability for `onRegion` functions that return a 
result, so <%=vars.product_name%> automatically retries a function if it does 
not execute successfully. You must code and configure the function to be highly 
available, and the calling application must invoke the function using the 
results collector `getResult` method.
 
 When a failure (such as an execution error or member crash while executing) 
occurs, the system responds by:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb 
b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
index a008ede..3d8c30c 100644
--- a/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
+++ b/geode-docs/developing/outside_data_sources/chapter_overview.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
-Apache Geode has application plug-ins to read data into the cache and write it 
out.
+<%=vars.product_name_long%> has application plug-ins to read data into the 
cache and write it out.
 
 <a id="outside_data_sources__section_100B707BB812430E8D9CFDE3BE4698D1"></a>
 The application plug-ins:

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb 
b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
index 4f309a0..b342e41 100644
--- 
a/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
+++ 
b/geode-docs/developing/outside_data_sources/how_data_loaders_work.html.md.erb
@@ -24,7 +24,7 @@ By default, a region has no data loader defined. Plug an 
application-defined loa
 <a id="how_data_loaders_work__section_1E600469D223498DB49446434CE9B0B4"></a>
 The loader is called on cache misses during get operations, and it populates 
the cache with the new entry value in addition to returning the value to the 
calling thread.
 
-A loader can be configured to load data into the Geode cache from an outside 
data store. To do the reverse operation, writing data from the Geode cache to 
an outside data store, use a cache writer event handler. See [Implementing 
Cache Event Handlers](../events/implementing_cache_event_handlers.html).
+A loader can be configured to load data into the <%=vars.product_name%> cache 
from an outside data store. To do the reverse operation, writing data from the 
<%=vars.product_name%> cache to an outside data store, use a cache writer event 
handler. See [Implementing Cache Event 
Handlers](../events/implementing_cache_event_handlers.html).
 
 How to install your cache loader depends on the type of region.
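
The cache-miss behavior described above — invoke the loader, populate the cache, and return the value to the caller — can be sketched like this. It is a plain-Java illustration of read-through loading, using a `Function` in place of Geode's cache loader plug-in:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of read-through loading: a get that misses invokes the loader,
// stores the loaded value in the cache, and returns it to the caller.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader;   // stands in for a cache loader plug-in

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {               // cache miss
            value = loader.apply(key);     // fetch from the outside data store
            if (value != null) {
                cache.put(key, value);     // populate the cache as well
            }
        }
        return value;
    }
}
```

Subsequent gets for the same key are served from the cache without touching the outside store, which is the point of installing a loader.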
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb 
b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
index 728b664..767e507 100644
--- a/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
+++ b/geode-docs/developing/outside_data_sources/sync_outside_data.html.md.erb
@@ -21,15 +21,15 @@ limitations under the License.
 
 Keep your distributed cache in sync with an outside data source by programming 
and installing application plug-ins for your region.
 
--   **[Overview of Outside Data 
Sources](../../developing/outside_data_sources/chapter_overview.html)**
+-   **[Overview of Outside Data Sources](chapter_overview.html)**
 
-    Apache Geode has application plug-ins to read data into the cache and 
write it out.
+    <%=vars.product_name_long%> has application plug-ins to read data into the 
cache and write it out.
 
--   **[How Data Loaders 
Work](../../developing/outside_data_sources/how_data_loaders_work.html)**
+-   **[How Data Loaders Work](how_data_loaders_work.html)**
 
     By default, a region has no data loader defined. Plug an 
application-defined loader into any region by setting the region attribute 
cache-loader on the members that host data for the region.
 
--   **[Implement a Data 
Loader](../../developing/outside_data_sources/implementing_data_loaders.html)**
+-   **[Implement a Data Loader](implementing_data_loaders.html)**
 
     Program a data loader and configure your region to use it.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb 
b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
index e450ee5..0d41532 100644
--- a/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/chapter_overview.html.md.erb
@@ -21,44 +21,44 @@ limitations under the License.
 
 In addition to basic region management, partitioned regions include options 
for high availability, data location control, and data balancing across the 
distributed system.
 
--   **[Understanding 
Partitioning](../../developing/partitioned_regions/how_partitioning_works.html)**
+-   **[Understanding Partitioning](how_partitioning_works.html)**
 
     To use partitioned regions, you should understand how they work and your 
options for managing them.
 
--   **[Configuring Partitioned 
Regions](../../developing/partitioned_regions/managing_partitioned_regions.html)**
+-   **[Configuring Partitioned Regions](managing_partitioned_regions.html)**
 
     Plan the configuration and ongoing management of your partitioned region 
for host and accessor members and configure the regions for startup.
 
--   **[Configuring the Number of Buckets for a Partitioned 
Region](../../developing/partitioned_regions/configuring_bucket_for_pr.html)**
+-   **[Configuring the Number of Buckets for a Partitioned 
Region](configuring_bucket_for_pr.html)**
 
     Decide how many buckets to assign to your partitioned region and set the 
configuration accordingly.
 
--   **[Custom-Partitioning and Colocating 
Data](../../developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html)**
+-   **[Custom-Partitioning and Colocating 
Data](overview_custom_partitioning_and_data_colocation.html)**
 
-    You can customize how Apache Geode groups your partitioned region data 
with custom partitioning and data colocation.
+    You can customize how <%=vars.product_name_long%> groups your partitioned 
region data with custom partitioning and data colocation.
 
--   **[Configuring High Availability for Partitioned 
Regions](../../developing/partitioned_regions/overview_how_pr_ha_works.html)**
+-   **[Configuring High Availability for Partitioned 
Regions](overview_how_pr_ha_works.html)**
 
-    By default, Apache Geode stores only a single copy of your partitioned 
region data among the region's data stores. You can configure Geode to maintain 
redundant copies of your partitioned region data for high availability.
+    By default, <%=vars.product_name_long%> stores only a single copy of your 
partitioned region data among the region's data stores. You can configure 
<%=vars.product_name%> to maintain redundant copies of your partitioned region 
data for high availability.
 
--   **[Configuring Single-Hop Client Access to Server-Partitioned 
Regions](../../developing/partitioned_regions/overview_how_pr_single_hop_works.html)**
+-   **[Configuring Single-Hop Client Access to Server-Partitioned 
Regions](overview_how_pr_single_hop_works.html)**
 
     Single-hop data access enables the client pool to track where a 
partitioned region’s data is hosted in the servers. To access a single entry, 
the client directly contacts the server that hosts the key, in a single hop.
 
--   **[Rebalancing Partitioned Region 
Data](../../developing/partitioned_regions/rebalancing_pr_data.html)**
+-   **[Rebalancing Partitioned Region Data](rebalancing_pr_data.html)**
 
    In a distributed system with minimal contention among the concurrent threads 
reading or updating from the members, you can use rebalancing to dynamically 
increase or decrease your data and processing capacity.
 
-- **[Automated Rebalancing of Partitioned Region 
Data](../../developing/partitioned_regions/automated_rebalance.html)**
+- **[Automated Rebalancing of Partitioned Region 
Data](automated_rebalance.html)**
 
     The automated rebalance feature triggers a rebalance operation
 based on a time schedule.
 
--   **[Checking Redundancy in Partitioned 
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)**
+-   **[Checking Redundancy in Partitioned 
Regions](checking_region_redundancy.html)**
 
     Under some circumstances, it can be important to verify that your 
partitioned region data is redundant and that upon member restart, redundancy 
has been recovered properly across partitioned region members.
 
--   **[Moving Partitioned Region Data to Another 
Member](../../developing/partitioned_regions/moving_partitioned_data.html)**
+-   **[Moving Partitioned Region Data to Another 
Member](moving_partitioned_data.html)**
 
     You can use the `PartitionRegionHelper` `moveBucketByKey` and `moveData` 
methods to explicitly move partitioned region data from one member to another.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
 
b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
index c20e30e..962c21e 100644
--- 
a/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/colocating_partitioned_region_data.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
-By default, Geode allocates the data locations for a partitioned region 
independent of the data locations for any other partitioned region. You can 
change this policy for any group of partitioned regions, so that cross-region, 
related data is all hosted by the same member. This colocation speeds queries 
and other operations that access data from the regions.
+By default, <%=vars.product_name%> allocates the data locations for a 
partitioned region independent of the data locations for any other partitioned 
region. You can change this policy for any group of partitioned regions, so 
that cross-region, related data is all hosted by the same member. This 
colocation speeds queries and other operations that access data from the 
regions.
 
 <a 
id="colocating_partitioned_region_data__section_131EC040055E48A6B35E981B5C845A65"></a>
 **Note:**
@@ -39,7 +39,7 @@ Data colocation between partitioned regions generally 
improves the performance o
 **Procedure**
 
 1.  Identify one region as the central region, with which data in the other 
regions is explicitly colocated. If you use persistence for any of the regions, 
you must persist the central region.
-    1.  Create the central region before you create the others, either in the 
cache.xml or your code. Regions in the XML are created before regions in the 
code, so if you create any of your colocated regions in the XML, you must 
create the central region in the XML before the others. Geode will verify its 
existence when the others are created and return `IllegalStateException` if the 
central region is not there. Do not add any colocation specifications to this 
central region.
+    1.  Create the central region before you create the others, either in the 
cache.xml or your code. Regions in the XML are created before regions in the 
code, so if you create any of your colocated regions in the XML, you must 
create the central region in the XML before the others. <%=vars.product_name%> 
will verify its existence when the others are created and return 
`IllegalStateException` if the central region is not there. Do not add any 
colocation specifications to this central region.
     2.  For all other regions, in the region partition attributes, provide the 
central region's name in the `colocated-with` attribute. Use one of these 
methods:
         -   XML:
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
 
b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
index ccb7e71..f8dc971 100644
--- 
a/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/configuring_bucket_for_pr.html.md.erb
@@ -22,7 +22,7 @@ limitations under the License.
 Decide how many buckets to assign to your partitioned region and set the 
configuration accordingly.
 
 <a 
id="configuring_total_buckets__section_DF52B2BF467F4DB4B8B3D16A79EFCA39"></a>
-The total number of buckets for the partitioned region determines the 
granularity of data storage and thus how evenly the data can be distributed. 
Geode distributes the buckets as evenly as possible across the data stores. The 
number of buckets is fixed after region creation.
+The total number of buckets for the partitioned region determines the 
granularity of data storage and thus how evenly the data can be distributed. 
<%=vars.product_name%> distributes the buckets as evenly as possible across the 
data stores. The number of buckets is fixed after region creation.
 
 The partition attribute `total-num-buckets` sets the number for the entire 
partitioned region across all participating members. Set it using one of the 
following:
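
The role of the bucket count as the granularity of distribution can be illustrated with a simple hash-based assignment. This is a sketch only, assuming a modulo scheme; Geode's actual bucket assignment is internal:

```java
// Illustration: an entry's bucket is derived from its key, and buckets --
// not individual entries -- are what get spread across the data stores.
// A larger total-num-buckets gives finer granularity for balancing.
public class BucketAssignment {
    private final int totalNumBuckets;

    public BucketAssignment(int totalNumBuckets) {
        this.totalNumBuckets = totalNumBuckets;
    }

    public int bucketFor(Object key) {
        // same key always maps to the same bucket, wherever that bucket lives
        return Math.abs(key.hashCode() % totalNumBuckets);
    }
}
```

Because the mapping is fixed by the key, the bucket count cannot change after region creation without remapping every entry, which is why the number of buckets is fixed once the region is created.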
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb 
b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
index c084f4a..d77006c 100644
--- 
a/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/configuring_ha_for_pr.html.md.erb
@@ -25,33 +25,33 @@ Here are the main steps for configuring high availability 
for a partitioned regi
 
 1.  Set the number of redundant copies the system should maintain of the 
region data. See [Set the Number of Redundant 
Copies](set_pr_redundancy.html#set_pr_redundancy). 
 2.  (Optional) If you want to group your data store members into redundancy 
zones, configure them accordingly. See [Configure Redundancy Zones for 
Members](set_redundancy_zones.html#set_redundancy_zones). 
-3.  (Optional) If you want Geode to only place redundant copies on different 
physical machines, configure for that. See [Set Enforce Unique 
Host](set_enforce_unique_host.html#set_pr_redundancy). 
-4.  Decide how to manage redundancy recovery and change Geode's default 
behavior as needed. 
+3.  (Optional) If you want <%=vars.product_name%> to only place redundant 
copies on different physical machines, configure for that. See [Set Enforce 
Unique Host](set_enforce_unique_host.html#set_pr_redundancy). 
+4.  Decide how to manage redundancy recovery and change 
<%=vars.product_name%>'s default behavior as needed. 
     - **After a member crashes**. If you want automatic redundancy recovery, 
change the configuration for that. See [Configure Member Crash Redundancy 
Recovery for a Partitioned 
Region](set_crash_redundancy_recovery.html#set_crash_redundancy_recovery). 
     - **After a member joins**. If you do *not* want immediate, automatic 
redundancy recovery, change the configuration for that. See [Configure Member 
Join Redundancy Recovery for a Partitioned 
Region](set_join_redundancy_recovery.html#set_join_redundancy_recovery). 
 
-5.  Decide how many buckets Geode should attempt to recover in parallel when 
performing redundancy recovery. By default, the system recovers up to 8 buckets 
in parallel. Use the `gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property 
to increase or decrease the maximum number of buckets to recover in parallel 
any time redundancy recovery is performed.
+5.  Decide how many buckets <%=vars.product_name%> should attempt to recover 
in parallel when performing redundancy recovery. By default, the system 
recovers up to 8 buckets in parallel. Use the 
`gemfire.MAX_PARALLEL_BUCKET_RECOVERIES` system property to increase or 
decrease the maximum number of buckets to recover in parallel any time 
redundancy recovery is performed.
 6.  For all but fixed partitioned regions, review the points at which you kick 
off rebalancing. Redundancy recovery is done automatically at the start of any 
rebalancing. This is most important if you run with no automated recovery after 
member crashes or joins. See [Rebalancing Partitioned Region 
Data](rebalancing_pr_data.html#rebalancing_pr_data). 
 
 During runtime, you can add capacity by adding new members for the region. For 
regions that do not use fixed partitioning, you can also kick off a rebalancing 
operation to spread the region buckets among all members.
 
--   **[Set the Number of Redundant 
Copies](../../developing/partitioned_regions/set_pr_redundancy.html)**
+-   **[Set the Number of Redundant Copies](set_pr_redundancy.html)**
 
     Configure in-memory high availability for your partitioned region by 
specifying the number of secondary copies you want to maintain in the region's 
data stores.
 
--   **[Configure Redundancy Zones for 
Members](../../developing/partitioned_regions/set_redundancy_zones.html)**
+-   **[Configure Redundancy Zones for Members](set_redundancy_zones.html)**
 
-    Group members into redundancy zones so Geode will separate redundant data 
copies into different zones.
+    Group members into redundancy zones so <%=vars.product_name%> will 
separate redundant data copies into different zones.
 
--   **[Set Enforce Unique 
Host](../../developing/partitioned_regions/set_enforce_unique_host.html)**
+-   **[Set Enforce Unique Host](set_enforce_unique_host.html)**
 
-    Configure Geode to use only unique physical machines for redundant copies 
of partitioned region data.
+    Configure <%=vars.product_name%> to use only unique physical machines for 
redundant copies of partitioned region data.
 
--   **[Configure Member Crash Redundancy Recovery for a Partitioned 
Region](../../developing/partitioned_regions/set_crash_redundancy_recovery.html)**
+-   **[Configure Member Crash Redundancy Recovery for a Partitioned 
Region](set_crash_redundancy_recovery.html)**
 
    Configure whether and how redundancy is recovered in a partitioned region 
after a member crashes.
 
--   **[Configure Member Join Redundancy Recovery for a Partitioned 
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html)**
+-   **[Configure Member Join Redundancy Recovery for a Partitioned 
Region](set_join_redundancy_recovery.html)**
 
    Configure whether and how redundancy is recovered in a partitioned region 
after a member joins.
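
Step 5 above caps how many buckets are recovered concurrently. That throttling pattern looks roughly like the following plain-Java sketch, with a stand-in recovery task; it is not Geode's recovery code:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a fixed-size pool bounds in-flight recoveries, the way a
// MAX_PARALLEL_BUCKET_RECOVERIES-style limit bounds parallel bucket recovery.
public class ParallelRecovery {
    public static int recoverAll(List<Integer> bucketIds, int maxParallel) {
        ExecutorService pool = Executors.newFixedThreadPool(maxParallel);
        AtomicInteger recovered = new AtomicInteger();
        for (Integer bucketId : bucketIds) {
            pool.submit(() -> {
                // stand-in for copying one bucket's data to restore redundancy
                recovered.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return recovered.get();
    }
}
```

All buckets are eventually recovered either way; the cap only limits how many are in flight at once, trading recovery speed against load on the members.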
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
 
b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
index 0876613..62e5cab 100644
--- 
a/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/custom_partitioning_and_data_colocation.html.md.erb
@@ -23,7 +23,7 @@ Custom partitioning and data colocation can be used 
separately or in conjunction
 
 ## <a 
id="custom_partitioning_and_data_colocation__section_ABFEE9CB17AF44F1AE252AC10FB5E999"
 class="no-quick-link"></a>Custom Partitioning
 
-Use custom partitioning to group like entries into region buckets within a 
region. By default, Geode assigns new entries to buckets based on the entry key 
contents. With custom partitioning, you can assign your entries to buckets in 
whatever way you want.
+Use custom partitioning to group like entries into region buckets within a 
region. By default, <%=vars.product_name%> assigns new entries to buckets based 
on the entry key contents. With custom partitioning, you can assign your 
entries to buckets in whatever way you want.
 
 You can generally get better performance if you use custom partitioning to 
group similar data within a region. For example, a query run on all accounts 
created in January runs faster if all January account data is hosted by a 
single member. Grouping all data for a single customer can improve performance 
of data operations that work on customer data. Data-aware function execution 
takes advantage of custom partitioning.
 
@@ -40,19 +40,19 @@ All keys must be strings, specified with a syntax that 
includes
 a '|' character that delimits the string.
 The substring that precedes the '|' delimiter within the key
 partitions the entry.  
--   **Standard custom partitioning**. With standard partitioning, you group 
entries into buckets, but you do not specify where the buckets reside. Geode 
always keeps the entries in the buckets you have specified, but may move the 
buckets around for load balancing.
+-   **Standard custom partitioning**. With standard partitioning, you group 
entries into buckets, but you do not specify where the buckets reside. 
<%=vars.product_name%> always keeps the entries in the buckets you have 
specified, but may move the buckets around for load balancing.
 -   **Fixed custom partitioning**. With fixed partitioning, you provide 
standard partitioning plus you specify the exact member where each data entry 
resides. You do this by assigning the data entry to a bucket and to a partition 
and by naming specific members as primary and secondary hosts of each partition.
 
     This gives you complete control over the locations of your primary and any 
secondary buckets for the region. This can be useful when you want to store 
specific data on specific physical machines or when you need to keep data close 
to certain hardware elements.
 
     Fixed partitioning has these requirements and caveats:
 
-    -   Geode cannot rebalance fixed partition region data because it cannot 
move the buckets around among the host members. You must carefully consider 
your expected data loads for the partitions you create.
+    -   <%=vars.product_name%> cannot rebalance fixed partition region data 
because it cannot move the buckets around among the host members. You must 
carefully consider your expected data loads for the partitions you create.
     -   With fixed partitioning, the region configuration is different between 
host members. Each member identifies the named partitions it hosts, and whether 
it is hosting the primary copy or a secondary copy. You then program a fixed 
partition resolver to return the partition id, so each entry is placed on the 
right member. Only one member can be primary for a particular partition name 
and that member cannot be the partition's secondary.
 
 ## <a 
id="custom_partitioning_and_data_colocation__section_D2C66951FE38426F9C05050D2B9028D8"
 class="no-quick-link"></a>Data Colocation Between Regions
 
-With data colocation, Geode stores entries that are related across multiple 
data regions in a single member. Geode does this by storing all of the regions' 
buckets with the same ID together in the same member. During rebalancing 
operations, Geode moves these bucket groups together or not at all.
+With data colocation, <%=vars.product_name%> stores entries that are related 
across multiple data regions in a single member. <%=vars.product_name%> does 
this by storing all of the regions' buckets with the same ID together in the 
same member. During rebalancing operations, <%=vars.product_name%> moves these 
bucket groups together or not at all.
 
 So, for example, if you have one region with customer contact information and 
another region with customer orders, you can use colocation to keep all contact 
information and all orders for a single customer in a single member. This way, 
any operation done for a single customer uses the cache of only a single member.
 
@@ -60,6 +60,6 @@ This figure shows two regions with data colocation where the 
data is partitioned
 
 <img src="../../images_svg/colocated_partitioned_regions.svg" 
id="custom_partitioning_and_data_colocation__image_525AC474950F473ABCDE8E372583C5DF"
 class="image" />
 
-Data colocation requires the same data partitioning mechanism for all of the 
colocated regions. You can use the default partitioning provided by Geode or 
any of the custom partitioning strategies.
+Data colocation requires the same data partitioning mechanism for all of the 
colocated regions. You can use the default partitioning provided by 
<%=vars.product_name%> or any of the custom partitioning strategies.
 
 You must use the same high availability settings across your colocated regions.
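
The '|'-delimited key partitioning described above can be sketched without the Geode API. This is a hypothetical stand-in (class name and bucket count invented for illustration): the substring before the '|' delimiter is hashed, so all entries sharing a prefix land in the same bucket and therefore on the same member.

```java
// Illustrative only: mimics partitioning string keys by the
// substring that precedes the '|' delimiter.
public class PrefixBucketSketch {
    // Bucket count is illustrative, not a Geode default.
    static final int TOTAL_NUM_BUCKETS = 113;

    // The routing prefix is everything before the '|' delimiter.
    static String routingPrefix(String key) {
        int bar = key.indexOf('|');
        if (bar < 0) {
            throw new IllegalArgumentException("key must contain '|': " + key);
        }
        return key.substring(0, bar);
    }

    // Hash only the prefix, so "customer17|order1" and
    // "customer17|order2" map to the same bucket.
    static int bucketFor(String key) {
        return Math.floorMod(routingPrefix(key).hashCode(), TOTAL_NUM_BUCKETS);
    }

    public static void main(String[] args) {
        System.out.println(
            bucketFor("customer17|order1") == bucketFor("customer17|order2")); // prints true
    }
}
```

Grouping by prefix is what makes colocated operations and data-aware function execution cheap: everything for one customer sits in one bucket, hence on one member.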

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb 
b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
index c846995..42ea7f8 100644
--- 
a/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/how_partitioning_works.html.md.erb
@@ -32,13 +32,13 @@ A distributed system can have multiple partitioned regions, 
and it can mix parti
 
 ## <a id="how_partitioning_works__section_260C2455FC8C40A094B39BF585D06B7D" 
class="no-quick-link"></a>Data Partitioning
 
-Geode automatically determines the physical location of data in the members 
that host a partitioned region's data. Geode breaks partitioned region data 
into units of storage known as buckets and stores each bucket in a region host 
member. Buckets are distributed in accordance to the member’s region 
attribute settings.
+<%=vars.product_name%> automatically determines the physical location of data 
in the members that host a partitioned region's data. <%=vars.product_name%> 
breaks partitioned region data into units of storage known as buckets and 
stores each bucket in a region host member. Buckets are distributed in 
accordance with the member’s region attribute settings.
 
 When an entry is created, it is assigned to a bucket. Keys are grouped 
together in a bucket and always remain there. If the configuration allows, the 
buckets may be moved between members to balance the load.
 
 You must run the data stores needed to accommodate storage for the partitioned 
region’s buckets. You can start new data stores on the fly. When a new data 
store creates the region, it takes responsibility for as many buckets as 
allowed by the partitioned region and member configuration.
 
-You can customize how Geode groups your partitioned region data with custom 
partitioning and data colocation.
+You can customize how <%=vars.product_name%> groups your partitioned region 
data with custom partitioning and data colocation.
 
 ## <a id="how_partitioning_works__section_155F9D4AB539473F848FD05E413B21B3" 
class="no-quick-link"></a>Partitioned Region Operation
 
@@ -52,7 +52,7 @@ Keep the following in mind about partitioned regions:
 
 -   Partitioned regions never run asynchronously. Operations in partitioned 
regions always wait for acknowledgement from the caches containing the original 
data entry and any redundant copies.
 -   A partitioned region needs a cache loader in every region data store 
(`local-max-memory` &gt; 0).
--   Geode distributes the data buckets as evenly as possible across all 
members storing the partitioned region data, within the limits of any custom 
partitioning or data colocation that you use. The number of buckets allotted 
for the partitioned region determines the granularity of data storage and thus 
how evenly the data can be distributed. The number of buckets is a total for 
the entire region across the distributed system.
--   In rebalancing data for the region, Geode moves buckets, but does not move 
data around inside the buckets.
+-   <%=vars.product_name%> distributes the data buckets as evenly as possible 
across all members storing the partitioned region data, within the limits of 
any custom partitioning or data colocation that you use. The number of buckets 
allotted for the partitioned region determines the granularity of data storage 
and thus how evenly the data can be distributed. The number of buckets is a 
total for the entire region across the distributed system.
+-   In rebalancing data for the region, <%=vars.product_name%> moves buckets, 
but does not move data around inside the buckets.
 -   You can query partitioned regions, but there are certain limitations. See 
[Querying Partitioned 
Regions](../querying_basics/querying_partitioned_regions.html#querying_partitioned_regions)
 for more information.
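
The total bucket count discussed above is fixed when the region is created; a gfsh sketch (region name illustrative; option names as documented for the gfsh `create region` command):

```
gfsh> create region --name=exampleRegion --type=PARTITION --total-num-buckets=113
```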
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb 
b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
index ba83732..baa5e56 100644
--- a/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
+++ b/geode-docs/developing/partitioned_regions/how_pr_ha_works.html.md.erb
@@ -25,7 +25,7 @@ With high availability, each member that hosts data for the 
partitioned region g
 
 With redundancy, if one member fails, operations continue on the partitioned 
region with no interruption of service:
 
--   If the member hosting the primary copy is lost, Geode makes a secondary 
copy the primary. This might cause a temporary loss of redundancy, but not a 
loss of data.
+-   If the member hosting the primary copy is lost, <%=vars.product_name%> 
makes a secondary copy the primary. This might cause a temporary loss of 
redundancy, but not a loss of data.
 -   Whenever there are not enough secondary copies to satisfy redundancy, the 
system works to recover redundancy by assigning another member as secondary and 
copying the data to it.
 
 **Note:**
@@ -37,20 +37,20 @@ Without redundancy, the loss of any of the region's data 
stores causes the loss
 
 ## <a id="how_pr_ha_works__section_7045530D601F4C65A062B5FDD0DD9206" 
class="no-quick-link"></a>Controlling Where Your Primaries and Secondaries 
Reside
 
-By default, Geode places your primary and secondary data copies for you, 
avoiding placement of two copies on the same physical machine. If there are not 
enough machines to keep different copies separate, Geode places copies on the 
same physical machine. You can change this behavior, so Geode only places 
copies on separate machines.
+By default, <%=vars.product_name%> places your primary and secondary data 
copies for you, avoiding placement of two copies on the same physical machine. 
If there are not enough machines to keep different copies separate, 
<%=vars.product_name%> places copies on the same physical machine. You can 
change this behavior, so <%=vars.product_name%> only places copies on separate 
machines.
 
-You can also control which members store your primary and secondary data 
copies. Geode provides two options:
+You can also control which members store your primary and secondary data 
copies. <%=vars.product_name%> provides two options:
 
--   **Fixed custom partitioning**. This option is set for the region. Fixed 
partitioning gives you absolute control over where your region data is hosted. 
With fixed partitioning, you provide Geode with the code that specifies the 
bucket and data store for each data entry in the region. When you use this 
option with redundancy, you specify the primary and secondary data stores. 
Fixed partitioning does not participate in rebalancing because all bucket 
locations are fixed by you.
--   **Redundancy zones**. This option is set at the member level. Redundancy 
zones let you separate primary and secondary copies by member groups, or zones. 
You assign each data host to a zone. Then Geode places redundant copies in 
different redundancy zones, the same as it places redundant copies on different 
physical machines. You can use this to split data copies across different 
machine racks or networks, This option allows you to add members on the fly and 
use rebalancing to redistribute the data load, with redundant data maintained 
in separate zones. When you use redundancy zones, Geode will not place two 
copies of the data in the same zone, so make sure you have enough zones.
+-   **Fixed custom partitioning**. This option is set for the region. Fixed 
partitioning gives you absolute control over where your region data is hosted. 
With fixed partitioning, you provide <%=vars.product_name%> with the code that 
specifies the bucket and data store for each data entry in the region. When you 
use this option with redundancy, you specify the primary and secondary data 
stores. Fixed partitioning does not participate in rebalancing because all 
bucket locations are fixed by you.
+-   **Redundancy zones**. This option is set at the member level. Redundancy 
zones let you separate primary and secondary copies by member groups, or zones. 
You assign each data host to a zone. Then <%=vars.product_name%> places 
redundant copies in different redundancy zones, the same as it places redundant 
copies on different physical machines. You can use this to split data copies 
across different machine racks or networks. This option allows you to add 
members on the fly and use rebalancing to redistribute the data load, with 
redundant data maintained in separate zones. When you use redundancy zones, 
<%=vars.product_name%> will not place two copies of the data in the same zone, 
so make sure you have enough zones.
 
 ## <a id="how_pr_ha_works__section_87A2429B6277497184926E08E64B81C6" 
class="no-quick-link"></a>Running Processes in Virtual Machines
 
-By default, Geode stores redundant copies on different machines. When you run 
your processes in virtual machines, the normal view of the machine becomes the 
VM and not the physical machine. If you run multiple VMs on the same physical 
machine, you could end up storing partitioned region primary buckets in 
separate VMs, but on the same physical machine as your secondaries. If the 
physical machine fails, you can lose data. When you run in VMs, you can 
configure Geode to identify the physical machine and store redundant copies on 
different physical machines.
+By default, <%=vars.product_name%> stores redundant copies on different 
machines. When you run your processes in virtual machines, the normal view of 
the machine becomes the VM and not the physical machine. If you run multiple 
VMs on the same physical machine, you could end up storing partitioned region 
primary buckets in separate VMs, but on the same physical machine as your 
secondaries. If the physical machine fails, you can lose data. When you run in 
VMs, you can configure <%=vars.product_name%> to identify the physical machine 
and store redundant copies on different physical machines.
 
 ## <a id="how_pr_ha_works__section_CAB9440BABD6484D99525766E937CB55" 
class="no-quick-link"></a>Reads and Writes in Highly-Available Partitioned 
Regions
 
-Geode treats reads and writes differently in highly-available partitioned 
regions than in other regions because the data is available in multiple members:
+<%=vars.product_name%> treats reads and writes differently in highly-available 
partitioned regions than in other regions because the data is available in 
multiple members:
 
 -   Write operations (like `put` and `create`) go to the primary for the data 
keys and then are distributed synchronously to the redundant copies. Events are 
sent to the members configured with `subscription-attributes` `interest-policy` 
set to `all`.
 -   Read operations go to any member holding a copy of the data, with the 
local cache favored, so a read-intensive system can scale much better and 
handle higher loads.
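
The placement controls above are member-level settings; a minimal `gemfire.properties` sketch (zone name illustrative):

```
# Put this member in a named redundancy zone; redundant copies of a
# bucket are kept out of the zone holding the primary.
redundancy-zone=rack-a

# Keep redundant copies on distinct physical machines even when
# members run in VMs on the same host.
enforce-unique-host=true
```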

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
 
b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
index 358b1a1..c48e328 100644
--- 
a/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/join_query_partitioned_regions.html.md.erb
@@ -19,7 +19,7 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
-In order to perform equi-join operations on partitioned regions or partitioned 
regions and replicated regions, you need to use the `query.execute` method and 
supply it with a function execution context. You need to use Geode's 
FunctionService executor because join operations are not yet directly supported 
for partitioned regions without providing a function execution context.
+In order to perform equi-join operations on partitioned regions or partitioned 
regions and replicated regions, you need to use the `query.execute` method and 
supply it with a function execution context. You need to use 
<%=vars.product_name%>'s FunctionService executor because join operations are 
not yet directly supported for partitioned regions without providing a function 
execution context.
 
 See [Partitioned Region Query 
Restrictions](../query_additional/partitioned_region_query_restrictions.html#concept_5353476380D44CC1A7F586E5AE1CE7E8)
 for more information on partitioned region query limitations.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
 
b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
index 1221873..b2ebc08 100644
--- 
a/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/overview_custom_partitioning_and_data_colocation.html.md.erb
@@ -19,18 +19,18 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
-You can customize how Apache Geode groups your partitioned region data with 
custom partitioning and data colocation.
+You can customize how <%=vars.product_name_long%> groups your partitioned 
region data with custom partitioning and data colocation.
 
--   **[Understanding Custom Partitioning and Data 
Colocation](../../developing/partitioned_regions/custom_partitioning_and_data_colocation.html)**
+-   **[Understanding Custom Partitioning and Data 
Colocation](custom_partitioning_and_data_colocation.html)**
 
     Custom partitioning and data colocation can be used separately or in 
conjunction with one another.
 
--   **[Custom-Partition Your Region 
Data](../../developing/partitioned_regions/using_custom_partition_resolvers.html)**
+-   **[Custom-Partition Your Region 
Data](using_custom_partition_resolvers.html)**
 
-    By default, Geode partitions each data entry into a bucket using a hashing 
policy on the key. Additionally, the physical location of the key-value pair is 
abstracted away from the application. You can change these policies for a 
partitioned region. You can provide your own data partitioning resolver and you 
can additionally specify which members host which data buckets.
+    By default, <%=vars.product_name%> partitions each data entry into a 
bucket using a hashing policy on the key. Additionally, the physical location 
of the key-value pair is abstracted away from the application. You can change 
these policies for a partitioned region. You can provide your own data 
partitioning resolver and you can additionally specify which members host which 
data buckets.
 
--   **[Colocate Data from Different Partitioned 
Regions](../../developing/partitioned_regions/colocating_partitioned_region_data.html)**
+-   **[Colocate Data from Different Partitioned 
Regions](colocating_partitioned_region_data.html)**
 
-    By default, Geode allocates the data locations for a partitioned region 
independent of the data locations for any other partitioned region. You can 
change this policy for any group of partitioned regions, so that cross-region, 
related data is all hosted by the same member. This colocation speeds queries 
and other operations that access data from the regions.
+    By default, <%=vars.product_name%> allocates the data locations for a 
partitioned region independent of the data locations for any other partitioned 
region. You can change this policy for any group of partitioned regions, so 
that cross-region, related data is all hosted by the same member. This 
colocation speeds queries and other operations that access data from the 
regions.
 
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
 
b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
index 889c56c..e12ddc5 100644
--- 
a/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/overview_how_pr_ha_works.html.md.erb
@@ -19,13 +19,13 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
-By default, Apache Geode stores only a single copy of your partitioned region 
data among the region's data stores. You can configure Geode to maintain 
redundant copies of your partitioned region data for high availability.
+By default, <%=vars.product_name_long%> stores only a single copy of your 
partitioned region data among the region's data stores. You can configure 
<%=vars.product_name%> to maintain redundant copies of your partitioned region 
data for high availability.
 
--   **[Understanding High Availability for Partitioned 
Regions](../../developing/partitioned_regions/how_pr_ha_works.html)**
+-   **[Understanding High Availability for Partitioned 
Regions](how_pr_ha_works.html)**
 
     With high availability, each member that hosts data for the partitioned 
region gets some primary copies and some redundant (secondary) copies.
 
--   **[Configure High Availability for a Partitioned 
Region](../../developing/partitioned_regions/configuring_ha_for_pr.html)**
+-   **[Configure High Availability for a Partitioned 
Region](configuring_ha_for_pr.html)**
 
     Configure in-memory high availability for your partitioned region. Set 
other high-availability options, like redundancy zones and redundancy recovery 
strategies.
 

http://git-wip-us.apache.org/repos/asf/geode/blob/ed9a8fd4/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
 
b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
index 8be43f6..13d7498 100644
--- 
a/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
+++ 
b/geode-docs/developing/partitioned_regions/overview_how_pr_single_hop_works.html.md.erb
@@ -21,11 +21,11 @@ limitations under the License.
 
 Single-hop data access enables the client pool to track where a partitioned 
region’s data is hosted in the servers. To access a single entry, the client 
directly contacts the server that hosts the key, in a single hop.
 
--   **[Understanding Client Single-Hop Access to Server-Partitioned 
Regions](../../developing/partitioned_regions/how_pr_single_hop_works.html)**
+-   **[Understanding Client Single-Hop Access to Server-Partitioned 
Regions](how_pr_single_hop_works.html)**
 
     With single-hop access the client connects to every server, so more 
connections are generally used. This works fine for smaller installations, but 
is a barrier to scaling.
 
--   **[Configure Client Single-Hop Access to Server-Partitioned 
Regions](../../developing/partitioned_regions/configure_pr_single_hop.html)**
+-   **[Configure Client Single-Hop Access to Server-Partitioned 
Regions](configure_pr_single_hop.html)**
 
     Configure your client/server system for direct, single-hop access to 
partitioned region data in the servers.
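
Single-hop is controlled on the client pool; a client `cache.xml` sketch (pool name and locator address illustrative; `pr-single-hop-enabled` is the pool attribute documented for Geode clients):

```xml
<!-- Client pool with single-hop enabled: the pool tracks bucket
     locations and contacts the hosting server directly. -->
<pool name="serverPool" pr-single-hop-enabled="true">
  <locator host="locator-host" port="10334"/>
</pool>
```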
 
