http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
 
b/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
deleted file mode 100644
index dc9f198..0000000
--- 
a/geode-docs/developing/transactions/transactional_and_nontransactional_ops.html.md.erb
+++ /dev/null
@@ -1,117 +0,0 @@
----
-title: Comparing Transactional and Non-Transactional Operations
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-
-Between the begin operation and the commit or rollback operation are a series 
of ordinary Geode operations. When they are launched from within a transaction, 
the Geode operations can be classified into two types:
-
--   Transactional operations affect the transactional view
--   Non-transactional operations do not affect the transactional view
-
-An operation that acts directly on the cache does not usually act on the 
transactional view.
-
--   **[Transactional Operations](#transactional_operations)**
-
--   **[Non-Transactional Operations](#non_transactional_operations)**
-
--   **[Entry Operations](#entry_operations)**
-
--   **[Region Operations](#region_operations)**
-
--   **[Cache Operations](#cache_operations)**
-
--   **[No-Ops](#no-ops)**
-
-## <a id="transactional_operations" class="no-quick-link"></a>Transactional 
Operations
-
-The `CacheTransactionManager` methods are the only ones used specifically for 
cache transactions. Otherwise, you use the same methods as usual. Most methods 
that run within a transaction affect the transactional view, and they do not 
change the cache until the transaction commits. Methods that behave this way 
are considered transactional operations. Transactional operations are 
classified in two ways: whether they modify the transactional view or the cache 
itself, and whether they create write conflicts with other transactions.
-
-In general, methods that create, destroy, invalidate, update, or read region 
entries are transactional operations.
-
-Transactional operations that can cause write conflicts are those that modify an entry: put, create, destroy, local destroy, invalidate, local invalidate, and a load done to satisfy a get operation.
-
-Transactional read operations do not cause conflicts directly, but they can 
modify the transactional view. Read operations look for the entry in the 
transaction view first and then, if necessary, go to the cache. If the entry is 
returned by a cache read, it is stored as part of the transactional view. At 
commit time, the transaction uses the initial snapshot of the entry in the view 
to discover write conflicts.
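
The following fragment is a condensed sketch, not part of the original page; it assumes a `Region<String, Integer>` named `account` and an active `CacheTransactionManager`. It shows how a read taken inside the transaction supplies the snapshot that the commit later checks for write conflicts.

``` pre
// Sketch only: the region name, key, and amounts are illustrative.
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
Integer balance = account.get("customer1");     // transactional read; a snapshot is kept in the view
account.put("customer1",
    (balance != null ? balance : 0) + 100);     // transactional write; the cache is unchanged until commit
try {
  txMgr.commit();                               // write conflicts are detected here
} catch (CommitConflictException e) {
  // Another transaction changed "customer1" after our read; the application can retry.
}
```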
-
-## <a id="non_transactional_operations" 
class="no-quick-link"></a>Non-Transactional Operations
-
-A few methods, when invoked within a transaction, have no effect on the 
transactional view, but they have an immediate effect on the cache. They are 
considered non-transactional operations. Often, non-transactional operations 
are administrative, such as `Region.destroy` and `Region.invalidate`. These 
operations are not supported within a transaction. If you call them, the system 
throws an exception of type `UnsupportedOperationInTransactionException`.
-
-## <a id="entry_operations" class="no-quick-link"></a>Entry Operations
-
-**Note:**
-Transactional entry operations can be rolled back.
-
-| Operations | Methods | Transactional | Write Conflict |
-|------------|---------|---------------|----------------|
-| create | `Region.create, put, putAll, Map.put, putAll` | yes | yes |
-| modify | `Region.put, putAll, Map.put, putAll, Region.Entry.setValue, Map.Entry.setValue` | yes | yes |
-| load | `Region.get, Map.get` | yes | yes |
-| creation or update using `netSearch` | `Region.get, Map.get` | yes | no |
-| destroy: local and distributed | `Region.localDestroy, destroy, remove, Map.remove` | yes | yes |
-| invalidate: local and distributed | `Region.localInvalidate, invalidate` | yes | yes |
-| set user attribute | `Region.Entry.setUserAttribute` | yes | yes |
-| read of a single entry | `Region.get, getEntry, containsKey, containsValue, containsValueForKey` | yes | no |
-| read of a collection of entries | `Region.keySet, entrySet, values` | Becomes transactional when you access the keys or values within the collection. | no |
-
-Some transactional write operations also do a read before they write, and 
these can complete a transactional read even when the write fails. The 
following table of entry operations notes the conditions under which this can 
happen.
-
-**Note:**
-These operations can add a snapshot of an entry to the transaction’s view 
even when the write operation does not succeed.
-
-| Operations | Methods | Reads Without Writing |
-|------------|---------|-----------------------|
-| create | `Region.create` | when it throws an `EntryExistsException` |
-| destroy: local and distributed | `Region.localDestroy, destroy` | when it throws an `EntryNotFoundException` |
-| invalidate: local and distributed | `Region.localInvalidate, invalidate` | when it throws an `EntryNotFoundException` or the entry is already invalid |
-
-## <a id="region_operations" class="no-quick-link"></a>Region Operations
-
-When you create a region in a transaction, any data from the getInitialImage 
operation goes directly into the cache, rather than waiting for the transaction 
to commit.
-
-| Operations | Methods | Affected | Write Conflict |
-|------------|---------|----------|----------------|
-| destroy: local and distributed | `Region.localDestroyRegion, destroyRegion` | cache | yes |
-| invalidate: local and distributed | `Region.localInvalidateRegion, invalidateRegion` | cache | yes |
-| clear: local and distributed | `Region.localClear, clear, Map.clear` | cache and transaction | no |
-| close | `Region.close` | cache | yes |
-| mutate attribute | `Region.getAttributesMutator` methods | cache | no |
-| set user attribute | `Region.setUserAttribute` | cache | no |
-
-## <a id="cache_operations" class="no-quick-link"></a>Cache Operations
-
-When you create a region in a transaction, any data from the getInitialImage 
operation goes directly into the cache, rather than waiting for the transaction 
to commit.
-
-| Operations | Methods | Affected State | Write Conflict |
-|------------|---------|----------------|----------------|
-| create | `createRegionFactory().create()` | committed | no |
-| close | `close` | committed | yes |
-
-## <a id="no-ops" class="no-quick-link"></a>No-Ops
-
-Any operation that has no effect in a non-transactional context remains a 
no-op in a transactional context. For example, if you do two `localInvalidate` 
operations in a row on the same region entry, the second `localInvalidate` is a 
no-op. No-op operations do not:
-
--   Cause a listener invocation
--   Cause a distribution message to be sent to other members
--   Cause a change to an entry
--   Cause any conflict
-
-A no-op can do a transactional read.
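
As a sketch of the `localInvalidate` example above (the transaction manager, region, and key are illustrative, and the region is assumed to permit local invalidation inside a transaction):

``` pre
txMgr.begin();
region.localInvalidate("key1");   // transactional invalidate of the entry
region.localInvalidate("key1");   // no-op: the entry is already invalid, so there is no listener
                                  // invocation, no distribution, and no conflict; it can still
                                  // perform a transactional read of the entry
txMgr.commit();
```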
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/transactions/transactional_function_example.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/transactions/transactional_function_example.html.md.erb 
b/geode-docs/developing/transactions/transactional_function_example.html.md.erb
deleted file mode 100644
index 2b8a8c6..0000000
--- 
a/geode-docs/developing/transactions/transactional_function_example.html.md.erb
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title:  Transaction Embedded within a Function Example
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-This example demonstrates a function that does transactional updates to 
Customer and Order regions.
-
-<a 
id="concept_22331B3DBFAB4C0BA95EF103BFB71257__section_73662C16E0BF4E4780F737C45DBD3137"></a>
-
-``` pre
-/**
- * This function does transactional updates to customer and order regions
- */
-public class TransactionalFunction extends FunctionAdapter {
-
-  private Random random = new Random();
-  /* (non-Javadoc)
-   * @see 
org.apache.geode.cache.execute.FunctionAdapter#execute(org.apache.geode.cache.execute.FunctionContext)
-   */
-  @Override
-  public void execute(FunctionContext context) {
-    RegionFunctionContext rfc = (RegionFunctionContext)context;
-    Region<CustomerId, String> custRegion = rfc.getDataSet();
-    Region<OrderId, String> 
-        orderRegion = custRegion.getRegionService().getRegion("order");
-
-    CacheTransactionManager 
-        mgr = CacheFactory.getAnyInstance().getCacheTransactionManager();
-    CustomerId custToUpdate = (CustomerId)rfc.getFilter().iterator().next();
-    OrderId orderToUpdate = (OrderId)rfc.getArguments();
-    System.out.println("Starting a transaction...");
-    mgr.begin();
-    int randomInt = random.nextInt(1000);
-    System.out.println("for customer region updating "+custToUpdate);
-    custRegion.put(custToUpdate, 
-        "updatedCustomer_"+custToUpdate.getCustId()+"_"+randomInt);
-    System.out.println("for order region updating "+orderToUpdate);
-    orderRegion.put(orderToUpdate, 
-        "newOrder_"+orderToUpdate.getOrderId()+"_"+randomInt);
-    mgr.commit();
-    System.out.println("transaction completed");
-    context.getResultSender().lastResult(Boolean.TRUE);
-  }
-
-  /* (non-Javadoc)
-   * @see org.apache.geode.cache.execute.FunctionAdapter#getId()
-   */
-  @Override
-  public String getId() {
-    return "TxFunction";
-  }
-
-}
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/transactions/transactions_overview.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/transactions/transactions_overview.html.md.erb 
b/geode-docs/developing/transactions/transactions_overview.html.md.erb
deleted file mode 100644
index 3daa989..0000000
--- a/geode-docs/developing/transactions/transactions_overview.html.md.erb
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title:  Basic Transaction Example
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-This example operates on two replicated regions. It begins a transaction, 
updates one entry in each region, and commits the result.
-
-<a 
id="concept_F8D96C21C8444F99B47909CDEB86E60A__section_B6818C348224456387DEC5C9D3B5F250"></a>
-If the commit fails, it will be due to a `CommitConflictException`, which 
implies that a concurrent access caused a change to one of the items operated 
on within this transaction. This code fragment catches the exception, and it 
repeats the transaction attempt until the commit succeeds.
-
-``` pre
-Cache c = new CacheFactory().create();
-
-Region<String, Integer> cash = c.<String, Integer>createRegionFactory()
-    .setDataPolicy(DataPolicy.REPLICATE)
-    .create("cash");
-
-Region<String, Integer> trades = c.<String, Integer>createRegionFactory()
-    .setDataPolicy(DataPolicy.REPLICATE)
-    .create("trades");
-
-CacheTransactionManager txmgr = c.getCacheTransactionManager();
-boolean commitConflict = false;
-do {
-    try {
-        txmgr.begin();
-        final String customer = "Customer1";
-        final Integer purchase = Integer.valueOf(1000);
-        // Decrement cash
-        Integer cashBalance = cash.get(customer);
-        Integer newBalance = 
-            Integer.valueOf((cashBalance != null ? cashBalance : 0) 
-                - purchase);
-        cash.put(customer, newBalance);
-        // Increment trades
-        Integer tradeBalance = trades.get(customer);
-        newBalance = 
-            Integer.valueOf((tradeBalance != null ? tradeBalance : 0) 
-                + purchase);
-
-        trades.put(customer, newBalance);
-        txmgr.commit();
-        commitConflict = false;
-    } 
-    catch (CommitConflictException conflict) {
-        commitConflict = true;
-    }
-} while (commitConflict);
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/transactions/turning_off_jta.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/developing/transactions/turning_off_jta.html.md.erb 
b/geode-docs/developing/transactions/turning_off_jta.html.md.erb
deleted file mode 100644
index 883ac68..0000000
--- a/geode-docs/developing/transactions/turning_off_jta.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  Turning Off JTA Transactions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-You can configure regions to not participate in any JTA global transaction.
-
-The `ignore-jta` region attribute is a boolean that tells the cache to ignore 
any in-progress JTA transactions when performing cache operations. It is 
primarily used for cache loaders, cache writers, and cache listeners that need 
to perform non-transactional operations on a region, such as caching a result 
set. It is set per region, so some regions can participate in JTA transactions, 
while others avoid participating in them. This example sets the `ignore-jta` 
region attribute in the `cache.xml` file.
-
-cache.xml:
-
-``` pre
-<region name="bridge_region">
-   <region-attributes scope="local" ignore-jta="true" statistics-enabled="true">
-      <cache-writer> . . . </cache-writer>
-   </region-attributes>
-</region>
-```
-
-API:
-
-Using the API, you can turn off JTA transactions using `RegionFactory` and its 
method `setIgnoreJTA(boolean)`. The current setting for a region can be fetched 
from a region's `RegionAttributes` by using the `getIgnoreJTA` method.
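
A minimal sketch of the API approach; the region name and `RegionShortcut` are illustrative:

``` pre
Cache cache = new CacheFactory().create();
RegionFactory<String, Object> factory = cache.createRegionFactory(RegionShortcut.LOCAL);
factory.setIgnoreJTA(true);            // this region ignores in-progress JTA transactions
Region<String, Object> region = factory.create("bridge_region");

// Fetch the current setting from the region's attributes.
boolean ignoresJta = region.getAttributes().getIgnoreJTA();
```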
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/transactions/working_with_transactions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/developing/transactions/working_with_transactions.html.md.erb 
b/geode-docs/developing/transactions/working_with_transactions.html.md.erb
deleted file mode 100644
index 4a26d4c..0000000
--- a/geode-docs/developing/transactions/working_with_transactions.html.md.erb
+++ /dev/null
@@ -1,229 +0,0 @@
----
-title: Working with Geode Cache Transactions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-<a id="topic_tx2_gs4_5k"></a>
-
-
-This section contains guidelines and additional information on working with 
Geode and its cache transactions.
-
--   **[Setting Global Copy on Read](#concept_vx2_gs4_5k)**
-
--   **[Making a Safe Change Within a Transaction Using 
CopyHelper.copy](#concept_fdr_wht_vk)**
-
--   **[Transactions and Functions](#concept_ihn_zmt_vk)**
-
--   **[Using Queries and Indexes with Transactions](#concept_ty1_vnt_vk)**
-
--   **[Collections and Region.Entry Instances in 
Transactions](#concept_ksh_twz_vk)**
-
--   **[Using Eviction and Expiration Operations](#concept_vyt_txz_vk)**
-
--   **[Transactions and Consistent Regions](#transactions_and_consistency)**
-
--   **[Suspending and Resuming Transactions](#concept_u5b_ryz_vk)**
-
--   **[Using Cache Writer and Cache Listener Plug-Ins](#concept_ysx_nf1_wk)**
-
--   **[Configuring Transaction Plug-In Event Handlers](#concept_ocw_vf1_wk)**
-
--   **[How Transaction Events Are Managed](transaction_event_management.html)**
-
-## <a id="concept_vx2_gs4_5k" class="no-quick-link"></a>Setting Global Copy on 
Read
-
-Because many entry operations return a reference to the cached object, enabling copy-on-read avoids the problems that arise when an application modifies such a reference from within a transaction. To enable global copy-on-read for all reads, modify the `cache.xml` file or use the corresponding Java API call.
-
-Using cache.xml:
-
-``` pre
-<cache lock-lease="120" lock-timeout="60" search-timeout="300" 
copy-on-read="true">
-```
-
-API:
-
-``` pre
-Cache c = CacheFactory.getInstance(system);
- c.setCopyOnRead(true);
-```
-
-The copy-on-read attribute and the operations affected by the attribute 
setting are discussed in detail in [Managing Data 
Entries](../../basic_config/data_entries_custom_classes/managing_data_entries.html).
-
-## Making a Safe Change Within a Transaction Using CopyHelper.copy
-
-If `copy-on-read` is *not* globally set, and the cache uses replicated 
regions, explicitly make copies of the cache objects that are to be modified 
within a transaction. The `CopyHelper.copy` method makes copies:
-
-``` pre
-CacheTransactionManager cTxMgr = cache.getCacheTransactionManager();
-cTxMgr.begin(); 
-Object o = (StringBuffer) r.get("stringBuf");
-StringBuffer s = (StringBuffer) CopyHelper.copy(o);
-s.append("Changes unseen before commit. Read Committed."); 
-r.put("stringBuf", s); 
-cTxMgr.commit();
-```
-
-## Transactions and Functions
-
-You can run a function from inside a transaction and you can nest a 
transaction within a function, as long as your combination of functions and 
transactions does not result in nested transactions. See [Function 
Execution](../function_exec/chapter_overview.html) for more about functions.
-
-A single transaction may contain multiple functions.
-
-If you are suspending and resuming a transaction with multiple function calls, 
all functions in the transaction must execute on the same member.
-
-See [Transaction Embedded within a Function 
Example](transactional_function_example.html#concept_22331B3DBFAB4C0BA95EF103BFB71257)
 for an example.
-
-## Using Queries and Indexes with Transactions
-
-Queries and indexes reflect the cache contents and ignore the changes made by 
ongoing transactions. If you do a query from inside a transaction, the query 
does not reflect the changes made inside that transaction.
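
A short sketch of this behavior, assuming an existing transaction manager `txMgr`, a region named `exampleRegion`, and the standard `QueryService` API:

``` pre
txMgr.begin();
exampleRegion.put("key1", "uncommittedValue");    // visible only inside this transaction
Query query = cache.getQueryService()
    .newQuery("SELECT * FROM /exampleRegion");
SelectResults<?> results = (SelectResults<?>) query.execute();
// results reflects only committed data; it does not contain "uncommittedValue"
txMgr.commit();
```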
-
-## Collections and Region.Entry Instances in Transactions
-
-Collections and region entries used in a transaction must be created inside 
the transaction. After the transaction has completed, the application can no 
longer use any region entry or collection or associated iterator created within 
the transaction. An attempted use outside of the transaction will throw an 
`IllegalStateException`.
-
-Region collection operations include `Region.keySet`, `Region.entrySet`, and 
`Region.values`. You can create instances of `Region.Entry` through the 
`Region.getEntry` operation or by looking at the contents of the result 
returned by a `Region.entrySet` operation.
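
For example (a sketch; the transaction manager, region, and key names are illustrative):

``` pre
txMgr.begin();
Set<String> keys = exampleRegion.keySet();   // tied to the transactional view
// ... use keys while the transaction is active ...
txMgr.commit();
keys.iterator();                             // throws IllegalStateException: the collection
                                             // cannot be used after its transaction completes
```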
-
-## Using Eviction and Expiration Operations
-
-Entry expiration and LRU eviction affect the committed state. They are not 
part of a transaction, and therefore they cannot be rolled back.
-
-## About Eviction
-
-LRU eviction operations do not cause write conflicts with existing 
transactions, despite destroying or invalidating entries. LRU eviction is 
deferred on entries modified by the transaction until the commit completes. 
Because anything touched by the transaction has had its LRU clock reset, 
eviction of those entries is not likely to happen immediately after the commit.
-
-When a transaction commits its changes in a region with distributed scope, the 
operation can invoke eviction controllers in the remote caches, as well as in 
the local cache.
-
-## Configure Expiration
-
-Local expiration actions do not cause write conflicts, but distributed 
expiration can cause conflicts and prevent transactions from committing in the 
members receiving the distributed operation.
-
--   When you are using transactions on local, preloaded or empty regions, make 
expiration local if possible. For every instance of that region, configure an 
expiration action of local invalidate or local destroy. In a cache.xml 
declaration, use a line similar to this:
-
-    ``` pre
-    <expiration-attributes timeout="60" action="local-invalidate" />
-    ```
-
-    In regions modified by a transaction, local expiration is suspended. 
Expiration operations are batched and deferred per region until the transaction 
completes. Once cleanup starts, the manager processes pending expirations. 
Transactions that need to change the region wait until the expirations are 
complete.
-
--   With partitioned and replicated regions, you cannot use local expiration. 
When you are using distributed expiration, the expiration is not suspended 
during a transaction, and expiration operations distributed from another member 
can cause write conflicts. In replicated regions, you can avoid conflicts by 
setting up your distributed system this way:
-    -   Choose an instance of the region to drive region-wide expiration. Use 
a replicated region, if there is one.
-    -   Configure distributed expiration only in that region instance. The 
expiration action must be either invalidate or destroy. In a `cache.xml` file 
declaration, use a line similar to this:
-
-        ``` pre
-        <expiration-attributes timeout="300" action="destroy" />
-        ```
-
-    -   Run the transactions from the member in which expiration is configured.
-
-## Transactions and Consistent Regions
-
-A transaction that modifies a region in which consistency checking is enabled 
generates all necessary version information for region updates when the 
transaction commits.
-
-If a transaction modifies a normal, preloaded or empty region, the transaction 
is first delegated to a Geode member that holds a replicate for the region. 
This behavior is similar to the transactional behavior for partitioned regions, 
where the partitioned region transaction is forwarded to a member that hosts 
the primary for the partitioned region update.
-
-The limitation for transactions with a normal, preloaded or empty region is 
that, when consistency checking is enabled, a transaction cannot perform a 
`localDestroy` or `localInvalidate` operation against the region. Geode throws 
an `UnsupportedOperationInTransactionException` exception in such cases. An 
application should use a `destroy` or `invalidate` operation in place of a 
`localDestroy` or `localInvalidate` when consistency checks are enabled.
-
-## Suspending and Resuming Transactions
-
-The Geode `CacheTransactionManager` API provides the ability to suspend and 
resume transactions with the `suspend` and `resume` methods. The ability to 
suspend and resume is useful when a thread must perform some operations that 
should not be part of the transaction before the transaction can complete. A 
complex use case of suspend and resume implements a transaction that spans 
clients in which only one client at a time will not be suspended.
-
-Once a transaction is suspended, it loses the transactional view of the cache. 
None of the operations done within the transaction are visible to the thread. 
Any operations that are performed by the thread while the transaction is 
suspended are not part of the transaction.
-
-When a transaction is resumed, the resuming thread assumes the transactional 
view. A transaction that is suspended on a member must be resumed on the same 
member.
-
-Before resuming a transaction, you may want to check if the transaction exists 
on the member and whether it is suspended. The `tryResume` method implements 
this check and resume as an atomic step.
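
A sketch of the suspend and resume calls; the region, keys, and values are illustrative:

``` pre
txMgr.begin();
exampleRegion.put("key1", "value1");
TransactionId txId = txMgr.suspend();     // this thread loses the transactional view

// ... operations performed here are not part of the suspended transaction ...

if (txMgr.tryResume(txId)) {              // checks for the suspended transaction and resumes atomically
  exampleRegion.put("key2", "value2");    // back inside the same transaction
  txMgr.commit();
}
```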
-
-If the member with the primary copy of the data crashes, the transactional 
view associated with that data is lost. The secondary member for the data will 
not be able to resume any transactions suspended on the crashed member. You 
will need to take remedial steps to retry the transaction on a new primary copy 
of the data.
-
-If a suspended transaction is not touched for a period of time, Geode cleans 
it up automatically. By default, the timeout for a suspended transaction is 30 
minutes and can be configured using the system property 
`gemfire.suspendedtxTimeout`. For example, `gemfire.suspendedtxTimeout=60` 
specifies a timeout of 60 minutes.
-
-See [Basic Suspend and Resume Transaction 
Example](transaction_suspend_resume_example.html) for a sample code fragment 
that suspends and resumes a transaction.
-
-## Using Cache Writer and Cache Listener Plug-Ins
-
-All standard Geode application plug-ins work with transactions. In addition, 
the transaction interface offers specialized plug-ins that support 
transactional operation.
-
-No direct interaction exists between client transactions and client 
application plug-ins. When a client runs a transaction, Geode calls the 
plug-ins that are installed on the transaction's server delegate and its server 
host. Client application plug-ins are not called for operations inside the 
transaction or for the transaction as a whole. When the transaction is 
committed, the changes to the server cache are sent to the client cache 
according to client interest registration. These events can result in calls to 
the client's `CacheListener`s, as with any other events received from the 
server.
-
-The `EntryEvent` that a callback receives has a unique Geode transaction ID, 
so the cache listener can associate each event, as it occurs, with a particular 
transaction. The transaction ID of an `EntryEvent` that is not part of a 
transaction is null to distinguish it from a transaction ID.
-
--   `CacheLoader`. When a cache loader is called by a transaction operation, 
values loaded by the cache loader may cause a write conflict when the 
transaction commits.
-   `CacheWriter`. During a transaction, if a cache writer exists, its methods are invoked as usual for each operation as it is called within the transaction. The `netWrite` operation is not used. The only cache writer used is the one in the member where the transactional data resides.
--   `CacheListener`. The cache listener callbacks - local and remote - are 
triggered after the transaction commits. The system sends the conflated 
transaction events, in the order they were stored.
-
-For more information on writing cache event handlers, see [Implementing Cache 
Event Handlers](../events/implementing_cache_event_handlers.html).
-
-## <a id="concept_ocw_vf1_wk" class="no-quick-link"></a>Configuring 
Transaction Plug-In Event Handlers
-
-Geode has two types of transaction plug-ins: Transaction Writers and 
Transaction Listeners. You can optionally install one transaction writer and 
one or more transaction listeners per cache.
-
-Like JTA global transactions, you can use transaction plug-in event handlers 
to coordinate Geode cache transaction activity with an external data store. 
However, you typically use JTA global transactions when Geode is running as a 
peer data store with your external data stores. Transaction writers and 
listeners are typically used when Geode is acting as a front end cache to your 
backend database.
-
-**Note:**
-You can also use transaction plug-in event handlers when running JTA global 
transactions.
-
-## TransactionWriter
-
-When you commit a transaction, if a transaction writer is installed in the 
cache where the data updates were performed, it is called. The writer can do 
whatever work you need, including aborting the transaction.
-
-The transaction writer is the last place that an application can roll back a 
transaction. If the transaction writer throws any exception, the transaction is 
rolled back. For example, you might use a transaction writer to update a 
backend data source before the Geode cache transaction completes the commit. If 
the backend data source update fails, the transaction writer implementation can 
throw a 
[TransactionWriterException](/releases/latest/javadoc/org/apache/geode/cache/TransactionWriterException.html)
 to veto the transaction.
-
-A typical usage scenario would be to use the transaction writer to prepare the 
commit on the external database. Then in a transaction listener, you can apply 
the commit on the database.
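
A sketch of such a writer; `ExternalStore` is a placeholder for application code, while `TransactionWriter`, `TransactionEvent`, and `setWriter` are the Geode types and methods described above:

``` pre
public class BackendSyncWriter implements TransactionWriter {
  @Override
  public void beforeCommit(TransactionEvent event) throws TransactionWriterException {
    try {
      // Hypothetical application call that prepares the commit on the external database.
      ExternalStore.prepare(event.getEvents());
    } catch (Exception e) {
      // Throwing vetoes the transaction; Geode rolls it back.
      throw new TransactionWriterException(e);
    }
  }

  @Override
  public void close() {}
}

// Installation, typically done once at startup:
cache.getCacheTransactionManager().setWriter(new BackendSyncWriter());
```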
-
-## Transaction Listeners
-
-When the transaction ends, its thread calls the transaction listener to 
perform the appropriate follow-up for successful commits, failed commits, or 
voluntary rollbacks. The transaction that caused the listener to be called no 
longer exists by the time the listener code executes.
-
-Transaction listeners have access to the transactional view and thus are not 
affected by non-transactional update operations. `TransactionListener` methods 
cannot make transactional changes or cause a rollback. They can, however, start 
a new transaction. Multiple transactions on the same cache can cause concurrent 
invocation of `TransactionListener` methods, so implement methods that do the 
appropriate synchronization of the multiple threads for thread-safe operation.
-
-A transaction listener can preserve the result of a transaction, perhaps to 
compare with other transactions, or for reference in case of a failed commit. 
When a commit fails and the transaction ends, the application cannot just retry 
the transaction, but must build up the data again. For most applications, the 
most efficient action is just to start a new transaction and go back through 
the application logic again.
-
-The rollback and failed commit operations are local to the member where the 
transactional operations are run. When a successful commit writes to a 
distributed or partitioned region, however, the transaction results are 
distributed to other members the same as other updates. The transaction 
listeners on the receiving members reflect the changes the transaction makes in that member, not in the originating member. Any exceptions thrown by the 
transaction listener are caught by Geode and logged.
-
-To configure a transaction listener, add a `cache-transaction-manager` 
configuration to the cache definition and define one or more instances of 
`transaction-listener` there. The only parameter to this `transaction-listener` 
is `URL`, which must be a string, as shown in the following cache.xml example.
-
-**Note:**
-The `cache-transaction-manager` allows listeners to be established. This 
attribute does not install a different transaction manager.
-
-Using cache.xml:
-
-``` pre
-<cache search-timeout="60">
-           <cache-transaction-manager>
-             <transaction-listener>
-               <class-name>com.company.data.MyTransactionListener</class-name>
-                 <parameter name="URL">
-                    <string>jdbc:cloudscape:rmi:MyData</string>
-                 </parameter>
-             </transaction-listener>
-             <transaction-listener>
-              . . .   
-             </transaction-listener> 
-          </cache-transaction-manager>
-               . . . 
-        </cache>
-```
-
-Using the Java API:
-
-``` pre
-CacheTransactionManager manager = cache.getCacheTransactionManager(); 
-manager.addListener(new LoggingTransactionListener());
-```
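
The `LoggingTransactionListener` in the fragment above is not defined on this page; a minimal sketch of such a listener, assuming the `TransactionListenerAdapter` convenience base class, might look like this:

``` pre
public class LoggingTransactionListener extends TransactionListenerAdapter {
  @Override
  public void afterCommit(TransactionEvent event) {
    // The event carries the conflated entry events for the committed transaction.
    System.out.println("Committed " + event.getEvents().size()
        + " operations in transaction " + event.getTransactionId());
  }

  @Override
  public void afterRollback(TransactionEvent event) {
    System.out.println("Rolled back transaction " + event.getTransactionId());
  }
}
```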
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/getting_started/15_minute_quickstart_gfsh.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/getting_started/15_minute_quickstart_gfsh.html.md.erb 
b/geode-docs/getting_started/15_minute_quickstart_gfsh.html.md.erb
deleted file mode 100644
index ec1606a..0000000
--- a/geode-docs/getting_started/15_minute_quickstart_gfsh.html.md.erb
+++ /dev/null
@@ -1,516 +0,0 @@
----
-title: Apache Geode in 15 Minutes or Less
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-<a id="topic_FE3F28ED18E145F787431EC87B676A76"></a>
-
-Need a quick introduction to Apache Geode? Take this brief tour to try out 
basic features and functionality.
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_ECE5170BAD9B454E875F13BEB5762DDD"
 class="no-quick-link"></a>Step 1: Install Apache Geode.
-
-See [How to 
Install](installation/install_standalone.html#concept_0129F6A1D0EB42C4A3D24861AF2C5425)
 for instructions.
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_582F8CBBD99D42F1A55C07591E2E9E9E"
 class="no-quick-link"></a>Step 2: Use gfsh to start a Locator.
-
-In a terminal window, use the `gfsh` command line interface to start up a 
locator. Apache Geode *gfsh* (pronounced "jee-fish") provides a single, 
intuitive command-line interface from which you can launch, manage, and monitor 
Apache Geode processes, data, and applications. See [gfsh (Geode 
SHell)](../tools_modules/gfsh/chapter_overview.html).
-
-The *locator* is a Geode process that tells new, connecting members where 
running members are located and provides load balancing for server use. A 
locator, by default, starts up a JMX Manager, which is used for monitoring and 
managing of a Geode cluster. The cluster configuration service uses locators to 
persist and distribute cluster configurations to cluster members. See [Running 
Geode Locator Processes](../configuring/running/running_the_locator.html) and 
[Overview of the Cluster Configuration 
Service](../configuring/cluster_config/gfsh_persist.html).
-
-1.  Create a scratch working directory (for example, `my_gemfire`) and change 
directories into it. `gfsh` saves locator and server working directories and 
log files in this location.
-2.  Start gfsh by typing `gfsh` at the command line (or `gfsh.bat` if you are 
using Windows).
-
-    ``` pre
-        _________________________     __
-       / _____/ ______/ ______/ /____/ /
-      / /  __/ /___  /_____  / _____  /
-     / /__/ / ____/  _____/ / /    / /
-    /______/_/      /______/_/    /_/    v8.2.0
-
-    Monitor and Manage GemFire
-    gfsh>
-    ```
-
-3.  At the `gfsh` prompt, type:
-
-    ``` pre
-    gfsh>start locator --name=locator1
-    Starting a GemFire Locator in /home/username/my_gemfire/locator1...
-    .................................
-    Locator in /home/username/my_gemfire/locator1 on ubuntu.local[10334] as 
locator1 is currently online.
-    Process ID: 3529
-    Uptime: 18 seconds
-    GemFire Version: 8.2.0
-    Java Version: 1.8.0_60
-    Log File: /home/username/my_gemfire/locator1/locator1.log
-    JVM Arguments: -Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false
-    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
-    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: 
/home/username/Pivotal_GemFire_820_b17919_Linux/lib/gemfire.jar:
-    
/home/username/Pivotal_GemFire_820_b17919_Linux/lib/locator-dependencies.jar
-
-    Successfully connected to: [host=ubuntu.local, port=1099]
-
-    Cluster configuration service is up and running.
-    ```
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_02C79BFFB5334E78A5856AE1EB1F1F84"
 class="no-quick-link"></a>Step 3. Start Pulse.
-
-Start up the browser-based Pulse monitoring tool. Pulse is a Web Application 
that provides a graphical dashboard for monitoring vital, real-time health and 
performance of Geode clusters, members, and regions. See [Geode 
Pulse](../tools_modules/pulse/chapter_overview.html).
-
-``` pre
-gfsh>start pulse
-```
-
-This command launches Pulse and automatically connects you to the JMX Manager 
running in the Locator. At the Pulse login screen, type in the default username 
`admin` and password `admin`.
-
-The Pulse application now displays the locator you just started (locator1):
-
-<img src="../images/pulse_locator.png" 
id="topic_FE3F28ED18E145F787431EC87B676A76__image_ign_ff5_t4" class="image" />
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_C617BC1C70EB41B8BCA3691D6E3C891A"
 class="no-quick-link"></a>Step 4: Start a server.
-
-A Geode server is a process that runs as a long-lived, configurable member of 
a cluster (also called a *distributed system*). The Geode server is used 
primarily for hosting long-lived data regions and for running standard Geode 
processes such as the server in a client/server configuration. See [Running 
Geode Server Processes](../configuring/running/running_the_cacheserver.html).
-
-Start the cache server:
-
-``` pre
-gfsh>start server --name=server1 --server-port=40411
-
-```
-
-This command starts a cache server named "server1" on the specified port, 40411.
-
-Observe the changes (new member and server) in Pulse. Try expanding the 
distributed system icon to see the locator and cache server graphically.
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_3EA12E44B8394C6A9302DF4D14888AF4"
 class="no-quick-link"></a>Step 5: Create a replicated, persistent region.
-
-In this step you create a region with the `gfsh` command line utility. Regions 
are the core building blocks of the Geode cluster and provide the means for 
organizing your data. The region you create for this exercise employs 
replication to replicate data across members of the cluster and utilizes 
persistence to save the data to disk. See [Data 
Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
-
-1.  Create a replicated, persistent region:
-
-    ``` pre
-    gfsh>create region --name=regionA --type=REPLICATE_PERSISTENT
-    Member  | Status
-    ------- | --------------------------------------
-    server1 | Region "/regionA" created on "server1"
-    ```
-
-    Note that the region is hosted on server1.
-
-2.  Use the `gfsh` command line to view a list of the regions in the cluster.
-
-    ``` pre
-    gfsh>list regions
-    List of regions
-    ---------------
-    regionA
-    ```
-
-3.  List the members of your cluster. The locator and cache servers you 
started appear in the list:
-
-    ``` pre
-    gfsh>list members
-      Name   | Id
-    -------- | ---------------------------------------
-    locator1 | ubuntu(locator1:3529:locator)<v0>:59926
-    server1  | ubuntu(server1:3883)<v1>:65390
-    ```
-
-4.  To view specifics about a region, type the following:
-
-    ``` pre
-    gfsh>describe region --name=regionA
-    ..........................................................
-    Name            : regionA
-    Data Policy     : persistent replicate
-    Hosting Members : server1
-
-    Non-Default Attributes Shared By Hosting Members
-
-     Type  | Name | Value
-    ------ | ---- | -----
-    Region | size | 0
-    ```
-
-5.  In Pulse, click the green cluster icon to see all the new members and new 
regions that you just added to your cluster.
-
-**Note:** Keep this `gfsh` prompt open for the next steps.
-
-## Step 6: Manipulate data in the region and demonstrate persistence.
-
-Apache Geode manages data as key/value pairs. In most applications, a Java 
program adds, deletes and modifies stored data. You can also use gfsh commands 
to add and retrieve data. See [Data 
Commands](../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_C7DB8A800D6244AE8FF3ADDCF139DCE4).
-
-1.  Run the following `put` commands to add some data to the region:
-
-    ``` pre
-    gfsh>put --region=regionA --key="1" --value="one"
-    Result      : true
-    Key Class   : java.lang.String
-    Key         : 1
-    Value Class : java.lang.String
-    Old Value   : <NULL>
-
-
-    gfsh>put --region=regionA --key="2" --value="two"
-    Result      : true
-    Key Class   : java.lang.String
-    Key         : 2
-    Value Class : java.lang.String
-    Old Value   : <NULL>
-    ```
-
-2.  Run the following command to retrieve data from the region:
-
-    ``` pre
-    gfsh>query --query="select * from /regionA"
-
-    Result     : true
-    startCount : 0
-    endCount   : 20
-    Rows       : 2
-
-    Result
-    ------
-    two
-    one
-    ```
-
-    Note that the result displays the values for the two data entries you 
created with the `put` commands.
-
-    See [Data 
Entries](../basic_config/data_entries_custom_classes/chapter_overview.html).
-
-3.  Stop the cache server using the following command:
-
-    ``` pre
-    gfsh>stop server --name=server1
-    Stopping Cache Server running in /home/username/my_gemfire/server1 on 
ubuntu.local[40411] as server1...
-    Process ID: 3883
-    Log File: /home/username/my_gemfire/server1/server1.log
-    ....
-    ```
-
-4.  Restart the cache server using the following command:
-
-    ``` pre
-    gfsh>start server --name=server1 --server-port=40411
-    ```
-
-5.  Run the following command to retrieve data from the region again; notice that the data is still available:
-
-    ``` pre
-    gfsh>query --query="select * from /regionA"
-
-    Result     : true
-    startCount : 0
-    endCount   : 20
-    Rows       : 2
-
-    Result
-    ------
-    two
-    one
-    ```
-
-    Because regionA uses persistence, it writes a copy of the data to disk. 
When a server hosting regionA starts, the data is populated into the cache. 
Note that the result displays the values for the two data entries you created 
prior to stopping the server with the `put` commands.
-
-    See [Data 
Entries](../basic_config/data_entries_custom_classes/chapter_overview.html).
-
-    See [Data 
Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
-
-## Step 7: Examine the effects of replication.
-
-In this step, you start a second cache server. Because regionA is replicated, 
the data will be available on any server hosting the region.
-
-See [Data 
Regions](../basic_config/data_regions/chapter_overview.html#data_regions).
-
-1.  Start a second cache server:
-
-    ``` pre
-    gfsh>start server --name=server2 --server-port=40412
-    ```
-
-2.  Run the `describe region` command to view information about regionA:
-
-    ``` pre
-    gfsh>describe region --name=regionA
-    ..........................................................
-    Name            : regionA
-    Data Policy     : persistent replicate
-    Hosting Members : server1
-                      server2
-
-    Non-Default Attributes Shared By Hosting Members
-
-     Type  | Name | Value
-    ------ | ---- | -----
-    Region | size | 2
-    ```
-
-    Note that you do not need to create regionA again for server2. The output 
of the command shows that regionA is hosted on both server1 and server2. When 
gfsh starts a server, it requests the configuration from the cluster 
configuration service, which then distributes the shared configuration to any 
new servers joining the cluster.
-
-3.  Add a third data entry:
-
-    ``` pre
-    gfsh>put --region=regionA --key="3" --value="three"
-    Result      : true
-    Key Class   : java.lang.String
-    Key         : 3
-    Value Class : java.lang.String
-    Old Value   : <NULL>
-    ```
-
-4.  Open the Pulse application (in a Web browser) and observe the cluster 
topology. You should see a locator with two attached servers. Click the <span 
class="ph uicontrol">Data</span> tab to view information about regionA.
-5.  Stop the first cache server with the following command:
-
-    ``` pre
-    gfsh>stop server --name=server1
-    Stopping Cache Server running in /home/username/my_gemfire/server1 on 
ubuntu.local[40411] as server1...
-    Process ID: 4064
-    Log File: /home/username/my_gemfire/server1/server1.log
-    ....
-    ```
-
-6.  Retrieve data from the remaining cache server.
-
-    ``` pre
-    gfsh>query --query="select * from /regionA"
-
-    Result     : true
-    startCount : 0
-    endCount   : 20
-    Rows       : 3
-
-    Result
-    ------
-    two
-    one
-    three
-    ```
-
-    Note that the data contains 3 entries, including the entry you just added.
-
-7.  Add a fourth data entry:
-
-    ``` pre
-    gfsh>put --region=regionA --key="4" --value="four"
-    Result      : true
-    Key Class   : java.lang.String
-    Key         : 4
-    Value Class : java.lang.String
-    Old Value   : <NULL>
-    ```
-
-    Note that only server2 is running. Because the data is replicated and 
persisted, all of the data is still available. But the new data entry is 
currently only available on server2.
-
-    ``` pre
-    gfsh>describe region --name=regionA
-    ..........................................................
-    Name            : regionA
-    Data Policy     : persistent replicate
-    Hosting Members : server2
-
-    Non-Default Attributes Shared By Hosting Members
-
-     Type  | Name | Value
-    ------ | ---- | -----
-    Region | size | 4
-    ```
-
-8.  Stop the remaining cache server:
-
-    ``` pre
-    gfsh>stop server --name=server2
-    Stopping Cache Server running in /home/username/my_gemfire/server2 on 
ubuntu.local[40412] as server2...
-    Process ID: 4185
-    Log File: /home/username/my_gemfire/server2/server2.log
-    .....
-    ```
-
-## Step 8: Restart the cache servers in parallel.
-
-In this step you restart the cache servers in parallel. Because the data is 
persisted, the data is available when the servers restart. Because the data is 
replicated, you must start the servers in parallel so that they can synchronize 
their data before starting.
-
-1.  Start server1. Because regionA is replicated and persistent, it needs data 
from the other server to start and waits for the server to start:
-
-    ``` pre
-    gfsh>start server --name=server1 --server-port=40411
-    Starting a GemFire Server in /home/username/my_gemfire/server1...
-    
............................................................................
-    
............................................................................
-    ```
-
-    Note that if you look in the <span class="ph filepath">server1.log</span> 
file for the restarted server, you will see a log message similar to the 
following:
-
-    ``` pre
-    [info 2015/01/14 09:08:13.610 PST server1 <main> tid=0x1] Region /regionA has potentially stale data. It is waiting for another member to recover the latest data.
-      My persistent id:
-
-        DiskStore ID: 8e2d99a9-4725-47e6-800d-28a26e1d59b1
-        Name: server1
-        Location: /192.0.2.0:/home/username/my_gemfire/server1/.
-
-      Members with potentially new data:
-      [
-        DiskStore ID: 2e91b003-8954-43f9-8ba9-3c5b0cdd4dfa
-        Name: server2
-        Location: /192.0.2.0:/home/username/my_gemfire/server2/.
-      ]
-      Use the "gemfire list-missing-disk-stores" command to see all disk 
stores that
-    are being waited on by other members.
-    ```
-
-2.  In a second terminal window, change directories to the scratch working 
directory (for example, `my_gemfire`) and start gfsh:
-
-    ``` pre
-    [username@localhost ~/my_gemfire]$ gfsh
-        _________________________     __
-       / _____/ ______/ ______/ /____/ /
-      / /  __/ /___  /_____  / _____  /
-     / /__/ / ____/  _____/ / /    / /
-    /______/_/      /______/_/    /_/    v8.2.0
-
-    Monitor and Manage GemFire
-    ```
-
-3.  Run the following command to connect to the cluster:
-
-    ``` pre
-    gfsh>connect --locator=localhost[10334]
-    Connecting to Locator at [host=localhost, port=10334] ..
-    Connecting to Manager at [host=ubuntu.local, port=1099] ..
-    Successfully connected to: [host=ubuntu.local, port=1099]
-    ```
-
-4.  Start server2:
-
-    ``` pre
-    gfsh>start server --name=server2 --server-port=40412
-    ```
-
-    When server2 starts, note that **server1 completes its start up** in the 
first gfsh window:
-
-    ``` pre
-    Server in /home/username/my_gemfire/server1 on ubuntu.local[40411] as 
server1 is currently online.
-    Process ID: 3402
-    Uptime: 1 minute 46 seconds
-    GemFire Version: 8.2.0
-    Java Version: 1.8.0_60
-    Log File: /home/username/my_gemfire/server1/server1.log
-    JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] 
-Dgemfire.use-cluster-configuration=true
-    -XX:OnOutOfMemoryError=kill -KILL %p 
-Dgemfire.launcher.registerSignalHandlers=true
-    -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
-    Class-Path: 
/home/username/Pivotal_GemFire_820_b17919_Linux/lib/gemfire.jar:
-    /home/username/Pivotal_GemFire_820_b17919_Linux/lib/server-dependencies.jar
-    ```
-
-5.  Verify that the locator and two servers are running:
-
-    ``` pre
-    gfsh>list members
-      Name   | Id
-    -------- | ---------------------------------------
-    server2  | ubuntu(server2:3992)<v8>:21507
-    server1  | ubuntu(server1:3402)<v7>:36532
-    locator1 | ubuntu(locator1:2813:locator)<v0>:46644
-    ```
-
-6.  Run a query to verify that all the data you entered with the `put` 
commands is available:
-
-    ``` pre
-    gfsh>query --query="select * from /regionA"
-
-    Result     : true
-    startCount : 0
-    endCount   : 20
-    Rows       : 4
-
-    Result
-    ------
-    one
-    two
-    four
-    three
-
-    NEXT_STEP_NAME : END
-    ```
-
-7.  Stop server2 with the following command:
-
-    ``` pre
-    gfsh>stop server --dir=server2
-    Stopping Cache Server running in /home/username/my_gemfire/server2 on 
192.0.2.0[40412] as server2...
-    Process ID: 3992
-    Log File: /home/username/my_gemfire/server2/server2.log
-    ....
-    ```
-
-8.  Run a query to verify that all the data you entered with the `put` 
commands is still available:
-
-    ``` pre
-    gfsh>query --query="select * from /regionA"
-
-    Result     : true
-    startCount : 0
-    endCount   : 20
-    Rows       : 4
-
-    Result
-    ------
-    one
-    two
-    four
-    three
-
-    NEXT_STEP_NAME : END
-    ```
-
-## <a 
id="topic_FE3F28ED18E145F787431EC87B676A76__section_E417BEEC172B4E96A92A61DC7601E572"
 class="no-quick-link"></a>Step 9: Shut down the system including your locators.
-
-To shut down your cluster, do the following:
-
-1.  In the current `gfsh` session, stop the cluster:
-
-    ``` pre
-    gfsh>shutdown --include-locators=true
-    ```
-
-    See [shutdown](../tools_modules/gfsh/command-pages/shutdown.html).
-
-2.  When prompted, type 'Y' to confirm the shutdown of the cluster.
-
-    ``` pre
-    As a lot of data in memory will be lost, including possibly events in 
queues,
-    do you really want to shutdown the entire distributed system? (Y/n): Y
-    Shutdown is triggered
-
-    gfsh>
-    No longer connected to ubuntu.local[1099].
-    gfsh>
-    ```
-
-3.  Type `exit` to quit the gfsh shell.
-
-## <a id="topic_FE3F28ED18E145F787431EC87B676A76__section_C8694C6BB07E4430A73DDD72ABB473F1" class="no-quick-link"></a>Step 10: What to do next...
-
-Here are some suggestions on what to explore next with Apache Geode:
-
--   Continue reading the next section to learn more about the components and concepts that were just introduced.
--   To get more practice using `gfsh`, see [Tutorial—Performing Common Tasks with gfsh](../tools_modules/gfsh/tour_of_gfsh.html#concept_0B7DE9DEC1524ED0897C144EE1B83A34).
--   To learn about the cluster configuration service, see [Tutorial—Creating and Using a Cluster Configuration](../configuring/cluster_config/persisting_configurations.html#task_bt3_z1v_dl).

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/getting_started/book_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/getting_started/book_intro.html.md.erb b/geode-docs/getting_started/book_intro.html.md.erb
deleted file mode 100644
index 95d9f67..0000000
--- a/geode-docs/getting_started/book_intro.html.md.erb
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title:  Getting Started with Apache Geode
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-A tutorial demonstrates features, and a main features section describes key 
functionality.
-
--   **[About Apache Geode](geode_overview.html)**
-
-    Apache Geode is a data management platform that provides real-time, 
consistent access to data-intensive applications throughout widely distributed 
cloud architectures.
-
--   **[Main Features of Apache Geode](product_intro.html)**
-
-    This section summarizes the main features and key functionality of Apache 
Geode.
-
--   **[Prerequisites and Installation Instructions](../prereq_and_install.html)**
-
-    Apache Geode 1.0.0-incubating can be installed on any host that meets a small set of prerequisites by following the provided installation instructions.
-
--   **[Apache Geode in 15 Minutes or Less](15_minute_quickstart_gfsh.html)**
-
-    Need a quick introduction to Apache Geode? Take this brief tour to try out 
basic features and functionality.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/getting_started/geode_overview.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/getting_started/geode_overview.html.md.erb b/geode-docs/getting_started/geode_overview.html.md.erb
deleted file mode 100644
index 6f5c31f..0000000
--- a/geode-docs/getting_started/geode_overview.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title:  About Apache Geode
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Apache Geode is a data management platform that provides real-time, consistent 
access to data-intensive applications throughout widely distributed cloud 
architectures.
-
-<a id="concept_3B5E445B19884680900161BDF25E32C9__section_itx_b41_mr"></a>
-Geode pools memory, CPU, network resources, and optionally local disk across 
multiple processes to manage application objects and behavior. It uses dynamic 
replication and data partitioning techniques to implement high availability, 
improved performance, scalability, and fault tolerance. In addition to being a 
distributed data container, Geode is an in-memory data management system that 
provides reliable asynchronous event notifications and guaranteed message 
delivery.
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_0031B81824874FC18F0828DB66150833" class="no-quick-link"></a>Main Concepts and Components
-
-*Caches* are an abstraction that describe a node in a Geode distributed 
system. Application architects can arrange these nodes in peer-to-peer or 
client/server topologies.
-
-Within each cache, you define data *regions*. Data regions are analogous to 
tables in a relational database and manage data in a distributed fashion as 
name/value pairs. A *replicated* region stores identical copies of the data on 
each cache member of a distributed system. A *partitioned* region spreads the 
data among cache members. After the system is configured, client applications 
can access the distributed data in regions without knowledge of the underlying 
system architecture. You can define listeners to create notifications about 
when data has changed, and you can define expiration criteria to delete 
obsolete data in a region.
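
To make the cache/region concepts concrete, here is a minimal Java sketch (an editorial illustration, not part of the original page). It assumes the `org.apache.geode` API packages of the 1.0.0-incubating release (earlier GemFire-era builds used `com.gemstone.gemfire`) and reuses the `localhost[10334]` locator and `regionA` region that appear elsewhere in these docs.

``` java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RegionExample {
  public static void main(String[] args) {
    // Join the distributed system through the locator; this JVM becomes a cache member.
    Cache cache = new CacheFactory()
        .set("locators", "localhost[10334]")
        .create();

    // A replicated region keeps a full copy of the data on this member;
    // RegionShortcut.PARTITION would spread the entries across members instead.
    Region<String, String> regionA = cache
        .<String, String>createRegionFactory(RegionShortcut.REPLICATE)
        .create("regionA");

    regionA.put("1", "one");           // name/value pair, analogous to a row keyed by "1"
    System.out.println(regionA.get("1"));

    cache.close();
  }
}
```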
-
-For large production systems, Geode provides *locators*. Locators provide both 
discovery and load balancing services. You configure clients with a list of 
locator services and the locators maintain a dynamic list of member servers. By 
default, Geode clients and servers use port 40404 to discover each other.
-
-<a id="concept_3B5E445B19884680900161BDF25E32C9__section_zrl_c41_mr"></a>
-
-For more information on product features, see [Main Features of Apache Geode](product_intro.html).

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/getting_started/installation/install_standalone.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/getting_started/installation/install_standalone.html.md.erb b/geode-docs/getting_started/installation/install_standalone.html.md.erb
deleted file mode 100644
index 04b347a..0000000
--- a/geode-docs/getting_started/installation/install_standalone.html.md.erb
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title:  How to Install
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Build from source or use the ZIP or TAR distribution to install Apache Geode 
on every physical and virtual machine that will run Apache Geode.
-
-## Build from Source on Unix
-
-1.  Set the JAVA\_HOME environment variable.
-
-    ``` pre
-    JAVA_HOME=/usr/java/jdk1.8.0_60
-    export JAVA_HOME
-    ```
-
-2.  Download the project source from the Releases page found at 
[http://geode.incubator.apache.org](http://geode.incubator.apache.org/), and 
unpack the source code.
-3.  Within the directory containing the unpacked source code, build without 
tests:
-
-    ``` pre
-    $ ./gradlew build -Dskip.tests=true
-    ```
-
-    Or, build with the tests:
-
-    ``` pre
-    $ ./gradlew build 
-    ```
-
-4.  Verify the installation by invoking `gfsh` to print version information 
and exit. On Linux/Unix platforms, the version will be similar to:
-
-    ``` pre
-    $ cd geode-assembly/build/install/apache-geode
-    $ bin/gfsh version
-    v1.0.0-incubating
-    ```
-
-## Build from Source on Windows
-
-1.  Set the JAVA\_HOME environment variable. For example:
-
-    ``` pre
-    $ set JAVA_HOME="C:\Program Files\Java\jdk1.8.0_60" 
-    ```
-
-2.  Install Gradle version 2.3 or later.
-3.  Download the project source from the Releases page found at 
[http://geode.incubator.apache.org](http://geode.incubator.apache.org/), and 
unpack the source code.
-4.  Within the folder containing the unpacked source code, build without the 
tests:
-
-    ``` pre
-    $ gradle build -Dskip.tests=true
-    ```
-
-    Or, build with the tests:
-
-    ``` pre
-    $ gradle build
-    ```
-
-5.  Verify the installation by invoking `gfsh` to print version information 
and exit.
-
-    ``` pre
-    $ cd geode-assembly\build\install\apache-geode\bin
-    $ gfsh.bat version
-    v1.0.0-incubating
-    ```
-
-## <a id="concept_0129F6A1D0EB42C4A3D24861AF2C5425__section_D3326496B2BB47A7AB0CFC1A5E266842" class="no-quick-link"></a>Install Binaries from .zip or .tar File
-
-1.  Download the .zip or .tar file from the Releases page found at 
[http://geode.incubator.apache.org](http://geode.incubator.apache.org/).
-2.  Unzip the .zip file or expand the .tar file, where `path_to_product` is an 
absolute path, and the file name will vary due to the version number. For the 
.zip format:
-
-    ``` pre
-    $ unzip apache-geode-1.0.0-incubating.zip -d path_to_product
-    ```
-
-    For the .tar format:
-
-    ``` pre
-    $ tar -xvf apache-geode-1.0.0-incubating.tar -C path_to_product
-    ```
-
-3.  Set the JAVA\_HOME environment variable. On Linux/Unix platforms:
-
-    ``` pre
-    JAVA_HOME=/usr/java/jdk1.8.0_60
-    export JAVA_HOME
-    ```
-
-    On Windows platforms:
-
-    ``` pre
-    set JAVA_HOME=c:\Program Files\Java\jdk1.8.0_60 
-    ```
-
-4.  Add the Geode scripts to your PATH environment variable. On Linux/Unix 
platforms:
-
-    ``` pre
-    PATH=$PATH:$JAVA_HOME/bin:path_to_product/bin
-    export PATH
-    ```
-
-    On Windows platforms:
-
-    ``` pre
-    set PATH=%PATH%;%JAVA_HOME%\bin;path_to_product\bin 
-    ```
-
-5.  To verify the installation, type `gfsh version` at the command line and 
note that the output lists the installed version of Geode. For example:
-
-    ``` pre
-    $ gfsh version
-    v1.0.0-incubating
-    ```
-
-    For more detailed version information such as the date of the build, build 
number and JDK version being used, invoke:
-
-    ``` pre
-    $ gfsh version --full
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/getting_started/product_intro.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/getting_started/product_intro.html.md.erb b/geode-docs/getting_started/product_intro.html.md.erb
deleted file mode 100644
index 471bd42..0000000
--- a/geode-docs/getting_started/product_intro.html.md.erb
+++ /dev/null
@@ -1,101 +0,0 @@
----
-title:  Main Features of Apache Geode
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-This section summarizes the main features and key functionality of Apache 
Geode.
-
--   [High Read-and-Write Throughput](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_CF0E3E5C4F884374B8F2F536DD2A375C)
--   [Low and Predictable Latency](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_9C5D669B583646F1B817284EB494DDA7)
--   [High Scalability](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_EF7A73D35D1241289C9CA19EDDEBE959)
--   [Continuous Availability](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_CEB4ABFF83054AF6A47EA2FA09C240B1)
--   [Reliable Event Notifications](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_86D2B8CC346349F3913209AF87648A02)
--   [Parallelized Application Behavior on Data Stores](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_A65B5F0DE8BF4AA6AFF16E3A75D4E0AD)
--   [Shared-Nothing Disk Persistence](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_97CABBFF553647F6BBBC40AA7AF6D4C7)
--   [Reduced Cost of Ownership](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_FCB2640F1BED4692A93F9300A41CE70D)
--   [Single-Hop Capability for Client/Server](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_92A444D4B422434EBD5F81D11F32C1C7)
--   [Client/Server Security](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_577F601BC9854AA6B53CD3440F9B9A6A)
--   [Multisite Data Distribution](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_091A306900D7402CAE5A46B5F9BFD612)
--   [Continuous Querying](product_intro.html#concept_3B5E445B19884680900161BDF25E32C9__section_FF4C3B6E26104C4D93186F6FFE22B321)
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_CF0E3E5C4F884374B8F2F536DD2A375C" class="no-quick-link"></a>High Read-and-Write Throughput
-
-Geode uses concurrent main-memory data structures and a highly optimized 
distribution infrastructure to provide read-and-write throughput. Applications 
can make copies of data dynamically in memory through synchronous or 
asynchronous replication for high read throughput or partition the data across 
many Geode system members to achieve high read-and-write throughput. Data 
partitioning doubles the aggregate throughput if the data access is fairly 
balanced across the entire data set. Linear increase in throughput is limited 
only by the backbone network capacity.
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_9C5D669B583646F1B817284EB494DDA7" class="no-quick-link"></a>Low and Predictable Latency
-
-Geode's optimized caching layer minimizes context switches between threads and 
processes. It manages data in highly concurrent structures to minimize 
contention points. Communication to peer members is synchronous if the 
receivers can keep up, which keeps the latency for data distribution to a 
minimum. Servers manage object graphs in serialized form to reduce the strain 
on the garbage collector.
-
-Geode partitions subscription management (interest registration and continuous 
queries) across server data stores, ensuring that a subscription is processed 
only once for all interested clients. The resulting improvements in CPU use and 
bandwidth utilization improve throughput and reduce latency for client 
subscriptions.
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_EF7A73D35D1241289C9CA19EDDEBE959" class="no-quick-link"></a>High Scalability
-
-Geode achieves scalability through dynamic partitioning of data across many 
members and spreading the data load uniformly across the servers. For "hot" 
data, you can configure the system to expand dynamically to create more copies 
of the data. You can also provision application behavior to run in a 
distributed manner in close proximity to the data it needs.
-
-If you need to support high and unpredictable bursts of concurrent client 
load, you can increase the number of servers managing the data and distribute 
the data and behavior across them to provide uniform and predictable response 
times. Clients are continuously load balanced to the server farm based on 
continuous feedback from the servers on their load conditions. With data 
partitioned and replicated across servers, clients can dynamically move to 
different servers to uniformly load the servers and deliver the best response 
times.
-
-You can also improve scalability by implementing asynchronous "write behind" 
of data changes to external data stores, like a database. Geode avoids a 
bottleneck by queuing all updates in order and redundantly. You can also 
conflate updates and propagate them in batch to the database.
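
The write-behind pattern described above is implemented with an asynchronous event queue. The sketch below is an editorial illustration, not part of the original page; the queue id, batch settings, and the `println` standing in for a database call are assumptions, and the `org.apache.geode` packages and `localhost[10334]` locator follow the conventions used elsewhere in these docs.

``` java
import java.util.List;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;

public class WriteBehindExample {

  // Receives ordered batches of queued region updates and writes them to the
  // external store (a real implementation would issue database calls here).
  static class DatabaseWriter implements AsyncEventListener {
    @Override
    public boolean processEvents(List<AsyncEvent> events) {
      for (AsyncEvent event : events) {
        System.out.println("write-behind: " + event.getKey() + " -> " + event.getDeserializedValue());
      }
      return true; // the batch was handled, so Geode can remove it from the queue
    }

    @Override
    public void close() {}
  }

  public static void main(String[] args) {
    Cache cache = new CacheFactory().set("locators", "localhost[10334]").create();

    // Updates are queued in order; batches can be conflated and the queue persisted.
    cache.createAsyncEventQueueFactory()
        .setBatchSize(100)
        .setBatchConflationEnabled(true)
        .setPersistent(true)
        .create("databaseQueue", new DatabaseWriter());

    // Attach the queue to the region so every update is written behind.
    cache.createRegionFactory(RegionShortcut.PARTITION)
        .addAsyncEventQueueId("databaseQueue")
        .create("regionA");
  }
}
```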
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_CEB4ABFF83054AF6A47EA2FA09C240B1" class="no-quick-link"></a>Continuous Availability
-
-In addition to guaranteed consistent copies of data in memory, applications 
can persist data to disk on one or more Geode members synchronously or 
asynchronously by using Geode's "shared nothing disk architecture." All 
asynchronous events (store-forward events) are redundantly managed in at least 
two members such that if one server fails, the redundant one takes over. All 
clients connect to logical servers, and the client fails over automatically to 
alternate servers in a group during failures or when servers become 
unresponsive.
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_86D2B8CC346349F3913209AF87648A02" class="no-quick-link"></a>Reliable Event Notifications
-
-Publish/subscribe systems offer a data-distribution service where new events 
are published into the system and routed to all interested subscribers in a 
reliable manner. Traditional messaging platforms focus on message delivery, but 
often the receiving applications need access to related data before they can 
process the event. This requires them to access a standard database when the 
event is delivered, limiting the subscriber by the speed of the database.
-
-Geode offers data and events through a single system. Data is managed as 
objects in one or more distributed data regions, similar to tables in a 
database. Applications simply insert, update, or delete objects in data 
regions, and the platform delivers the object changes to the subscribers. The 
subscriber receiving the event has direct access to the related data in local 
memory or can fetch the data from one of the other members through a single hop.
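
A subscriber can be sketched as a client with a cache listener and registered interest. This is an editorial illustration under the same assumptions as the earlier sketches (`org.apache.geode` packages, `localhost[10334]` locator, `regionA` region); the listener body is purely illustrative.

``` java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class SubscriberExample {
  public static void main(String[] args) {
    // Connect as a client with the subscription channel enabled.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolSubscriptionEnabled(true)
        .create();

    // The listener is handed the changed object itself, not just a message,
    // so the subscriber already has the related data when the event arrives.
    Region<String, String> regionA = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new CacheListenerAdapter<String, String>() {
          @Override
          public void afterCreate(EntryEvent<String, String> event) {
            System.out.println("created " + event.getKey() + " = " + event.getNewValue());
          }

          @Override
          public void afterUpdate(EntryEvent<String, String> event) {
            System.out.println("updated " + event.getKey() + " = " + event.getNewValue());
          }
        })
        .create("regionA");

    regionA.registerInterest("ALL_KEYS"); // receive events for every entry in the region
  }
}
```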
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_A65B5F0DE8BF4AA6AFF16E3A75D4E0AD" class="no-quick-link"></a>Parallelized Application Behavior on Data Stores
-
-You can execute application business logic in parallel on the Geode members. 
Geode's data-aware function-execution service permits execution of arbitrary, 
data-dependent application functions on the members where the data is 
partitioned for locality of reference and scale.
-
-By colocating the relevant data and parallelizing the calculation, you 
increase overall throughput. The calculation latency is inversely proportional 
to the number of members on which it can be parallelized.
-
-The fundamental premise is to route the function transparently to the 
application that carries the data subset required by the function and to avoid 
moving data around on the network. An application function can be executed on only 
one member, in parallel on a subset of members, or in parallel across all 
members. This programming model is similar to the popular Map-Reduce model from 
Google. Data-aware function routing is most appropriate for applications that 
require iteration over multiple data items (such as a query or custom 
aggregation function).
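
A data-aware function can be sketched as follows (an editorial illustration; the function name, the entry-counting logic, and the package names are assumptions, not the product's own example):

``` java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Counts the entries each member holds locally; the function runs where the data lives.
public class CountLocalEntries implements Function {

  @Override
  public void execute(FunctionContext context) {
    RegionFunctionContext rc = (RegionFunctionContext) context;
    // Only the data hosted on this member is visited, so no entries move over the network.
    Region<?, ?> localData = PartitionRegionHelper.getLocalDataForContext(rc);
    context.getResultSender().lastResult(localData.size());
  }

  @Override
  public String getId() {
    return "CountLocalEntries";
  }

  @Override
  public boolean hasResult() {
    return true;
  }

  @Override
  public boolean optimizeForWrite() {
    return false;
  }

  @Override
  public boolean isHA() {
    return true;
  }
}
```

A caller would invoke it with something like `FunctionService.onRegion(regionA).execute(new CountLocalEntries())` and read one count per member from the returned `ResultCollector`.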
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_97CABBFF553647F6BBBC40AA7AF6D4C7" class="no-quick-link"></a>Shared-Nothing Disk Persistence
-
-Each Geode system member manages data on disk files independent of other 
members. Failures in disks or cache failures in one member do not affect the 
ability of another cache instance to operate safely on its disk files. This 
"shared nothing" persistence architecture allows applications to be configured 
such that different classes of data are persisted on different members across 
the system, dramatically increasing the overall throughput of the application 
even when disk persistence is configured for application objects.
-
-Unlike a traditional database system, Geode does not manage data and 
transaction logs in separate files. All data updates are appended to files that 
are similar to transactional logs of traditional databases. You can avoid 
disk-seek times if the disk is not concurrently used by other processes, and 
the only cost incurred is the rotational latency.
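
Configuring a member-local disk store and a persistent region looks roughly like this (an editorial sketch; the disk-store name and directory path are hypothetical):

``` java
import java.io.File;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;

public class PersistentRegionExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().set("locators", "localhost[10334]").create();

    // Each member appends its updates to its own operation-log files in its own
    // directory; no disk is shared with other members.
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] {new File("/var/geode/server1-data")})
        .create("localStore");

    // A persistent partitioned region writes through to the local disk store.
    cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
        .setDiskStoreName("localStore")
        .create("regionA");
  }
}
```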
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_FCB2640F1BED4692A93F9300A41CE70D" class="no-quick-link"></a>Reduced Cost of Ownership
-
-You can configure caching in tiers. The client application process can host a 
cache locally (in memory and overflow to disk) and delegate to a cache server 
farm on misses. Even a 30 percent hit ratio on the local cache translates to 
significant savings in costs. The total cost associated with every single 
transaction comes from the CPU cycles spent, the network cost, the access to 
the database, and intangible costs associated with database maintenance. By 
managing the data as application objects, you avoid the additional cost (CPU 
cycles) associated with mapping SQL rows to objects.
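
The local-tier-plus-server-farm arrangement can be sketched with a client region shortcut that caches locally and overflows to disk (an editorial illustration; the `orders` region and `order-42` key are hypothetical):

``` java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class NearCacheExample {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // CACHING_PROXY_OVERFLOW keeps recently used entries in local memory,
    // overflows colder entries to local disk, and goes to the server farm on a miss.
    Region<String, String> orders = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY_OVERFLOW)
        .create("orders");

    orders.get("order-42"); // first read fetches from a server and populates the local tier
    orders.get("order-42"); // later reads of the same entry are served locally
  }
}
```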
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_92A444D4B422434EBD5F81D11F32C1C7" class="no-quick-link"></a>Single-Hop Capability for Client/Server
-
-Clients can send individual data requests directly to the server holding the 
data key, avoiding multiple hops to locate data that is partitioned. Metadata 
in the client identifies the correct server. This feature improves performance 
and client access to partitioned regions in the server tier.
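
On the client side this behavior is governed by a pool setting; single-hop routing is generally enabled by default, so the sketch below (editorial, not from the original page) sets it only to make the choice explicit:

``` java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class SingleHopClient {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        // The client fetches bucket metadata and routes each key's request
        // directly to the server that owns that key.
        .setPoolPRSingleHopEnabled(true)
        .create();
  }
}
```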
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_577F601BC9854AA6B53CD3440F9B9A6A" class="no-quick-link"></a>Client/Server Security
-
-Geode supports running multiple, distinct users in client applications. This 
feature accommodates installations in which Geode clients are embedded in 
application servers and each application server supports data requests from 
many users. Each user may be authorized to access a small subset of data on the 
servers, as in a customer application where each customer can access only their 
own orders and shipments. Each user in the client connects to the server with 
its own set of credentials and has its own access authorization to the server 
cache.
-
-Client/server communication has increased security against replay attacks. The 
server sends the client a unique, random identifier with each response to be 
used in the next client request. Because of the identifier, even a repeated 
client operation call is sent as a unique request to the server.
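
The multi-user model can be sketched as follows (editorial illustration only; the user name, the `security-username`/`security-password` property keys, and the `orders` region all depend on the security framework configured on the servers and are assumptions here):

``` java
import java.util.Properties;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionService;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class MultiUserClient {
  public static void main(String[] args) {
    // Enable multi-user authentication on the client's pool.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolMultiuserAuthentication(true)
        .create();

    // In multi-user mode client regions must be proxies (no shared local caching).
    cache.createClientRegionFactory(ClientRegionShortcut.PROXY).create("orders");

    // Each user gets its own authenticated view with its own credentials.
    Properties aliceCredentials = new Properties();
    aliceCredentials.setProperty("security-username", "alice");
    aliceCredentials.setProperty("security-password", "alice-secret");

    RegionService aliceView = cache.createAuthenticatedView(aliceCredentials);
    Region<String, String> aliceOrders = aliceView.getRegion("orders");
    aliceOrders.get("order-42"); // authorized (or rejected) under alice's credentials
  }
}
```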
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_091A306900D7402CAE5A46B5F9BFD612" class="no-quick-link"></a>Multisite Data Distribution
-
-Scalability problems can result from data sites being spread out 
geographically across a wide-area network (WAN). Geode offers a model to 
address these topologies, ranging from a single peer-to-peer cluster to 
reliable communications between data centers across the WAN. This model allows 
distributed systems to scale out in an unbounded and loosely coupled fashion 
without loss of performance, reliability or data consistency.
-
-At the core of this architecture is the gateway sender configuration used for 
distributing region events to a remote site. You can deploy gateway sender 
instances in parallel, which enables Geode to increase the throughput for 
distributing region events across the WAN. You can also configure gateway 
sender queues for persistence and high availability to avoid data loss in the 
case of a member failure.
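
A sending site might configure a parallel, persistent gateway sender roughly as follows (an editorial sketch; the site ids, host names, and sender id are hypothetical):

``` java
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;

public class WanSenderExample {
  public static void main(String[] args) {
    // This member belongs to distributed-system-id 1 and knows the remote site's locator.
    Cache cache = new CacheFactory()
        .set("distributed-system-id", "1")
        .set("locators", "localhost[10334]")
        .set("remote-locators", "remotehost[10334]")
        .create();

    // A parallel, persistent gateway sender queues this site's region events for site 2.
    cache.createGatewaySenderFactory()
        .setParallel(true)
        .setPersistenceEnabled(true)
        .create("siteTwoSender", 2); // 2 = remote distributed-system-id

    cache.createRegionFactory(RegionShortcut.PARTITION)
        .addGatewaySenderId("siteTwoSender")
        .create("regionA");
  }
}
```

The receiving site would define a matching gateway receiver and a region of the same name.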
-
-## <a id="concept_3B5E445B19884680900161BDF25E32C9__section_FF4C3B6E26104C4D93186F6FFE22B321" class="no-quick-link"></a>Continuous Querying
-
-In messaging systems like Java Message Service, clients subscribe to topics 
and queues. Any message delivered to a topic is sent to the subscriber. Geode 
allows continuous querying by having applications express complex interest 
using Object Query Language.
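
A continuous query registers interest in an OQL predicate rather than a topic. The sketch below is an editorial illustration; the `orders` region, the `total` field, and the CQ name are hypothetical, and the packages follow the 1.0.0-incubating `org.apache.geode` naming.

``` java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.CqAttributes;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

public class ContinuousQueryExample {
  public static void main(String[] args) throws Exception {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolSubscriptionEnabled(true) // CQ events arrive over the subscription channel
        .create();

    QueryService queryService = cache.getQueryService();

    // The listener fires whenever the query's result set gains, loses, or changes a row.
    CqAttributesFactory attributesFactory = new CqAttributesFactory();
    attributesFactory.addCqListener(new CqListener() {
      @Override
      public void onEvent(CqEvent event) {
        System.out.println(event.getQueryOperation() + ": " + event.getNewValue());
      }

      @Override
      public void onError(CqEvent event) {
        System.err.println("CQ error: " + event.getThrowable());
      }

      @Override
      public void close() {}
    });
    CqAttributes cqAttributes = attributesFactory.create();

    CqQuery bigOrders = queryService.newCq(
        "bigOrders", "SELECT * FROM /orders o WHERE o.total > 1000", cqAttributes);
    bigOrders.execute(); // from now on, matching updates are pushed to the listener
  }
}
```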
