http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb 
b/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
new file mode 100644
index 0000000..89e7178
--- /dev/null
+++ b/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
@@ -0,0 +1,79 @@
+---
+title:  Disk Store File Names and Extensions
+---
+
+Disk store files include store management files, access control files, and the operation log (oplog) files. The oplog consists of one file for deletions and another for all other operations.
+
+<a 
id="file_names_and_extensions__section_AE90870A7BDB425B93111D1A6E166874"></a>
+The following tables describe file names and extensions; they are followed by example disk store files.
+
+## <a id="file_names_and_extensions__section_C99ABFDB1AEA4FE4B38F5D4F1D612F71" 
class="no-quick-link"></a>File Names
+
+File names have three parts:
+
+**First Part of File Name: Usage Identifier**
+
+| Values   | Used for                                                                | Examples                                   |
+|----------|-------------------------------------------------------------------------|--------------------------------------------|
+| OVERFLOW | Oplog data from overflow regions and queues only.                       | OVERFLOWoverflowDS1\_1.crf                 |
+| BACKUP   | Oplog data from persistent and persistent+overflow regions and queues.  | BACKUPoverflowDS1.if, BACKUPDEFAULT.if     |
+| DRLK\_IF | Access control - locking the disk store.                                | DRLK\_IFoverflowDS1.lk, DRLK\_IFDEFAULT.lk |
+
+**Second Part of File Name: Disk Store Name**
+
+| Values                  | Used for                                                                                                                   | Examples                                                                              |
+|-------------------------|----------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
+| &lt;disk store name&gt; | Non-default disk stores.                                                                                                   | name="overflowDS1" DRLK\_IFoverflowDS1.lk, name="persistDS1" BACKUPpersistDS1\_1.crf |
+| DEFAULT                 | Default disk store name, used when persistence or overflow are specified on a region or queue but no disk store is named.  | DRLK\_IFDEFAULT.lk, BACKUPDEFAULT\_1.crf                                              |
+
+**Third Part of File Name: oplog Sequence Number**
+
+| Values                            | Used for                                         | Examples                                                                     |
+|-----------------------------------|--------------------------------------------------|-------------------------------------------------------------------------------|
+| Sequence number in the format \_n | Oplog data files only. Numbering starts with 1.  | OVERFLOWoverflowDS1\_1.crf, BACKUPpersistDS1\_2.crf, BACKUPpersistDS1\_3.crf |
+
+## <a id="file_names_and_extensions__section_4FC89D10D6304088882B2E278A889A9B" 
class="no-quick-link"></a>File Extensions
+
+| File extension | Used for                                         | Notes                                                                                                 |
+|----------------|--------------------------------------------------|---------------------------------------------------------------------------------------------------------|
+| if             | Disk store metadata                              | Stored in the first disk-dir listed for the store. Negligible size - not considered in size control. |
+| lk             | Disk store access control                        | Stored in the first disk-dir listed for the store. Negligible size - not considered in size control. |
+| crf            | Oplog: create, update, and invalidate operations | Pre-allocated 90% of the total max-oplog-size at creation.                                            |
+| drf            | Oplog: delete operations                         | Pre-allocated 10% of the total max-oplog-size at creation.                                            |
+| krf            | Oplog: key and crf offset information            | Created after the oplog has reached the max-oplog-size. Used to improve performance at startup.      |
+
+Example files for disk stores persistDS1 and overflowDS1:
+
+``` pre
+bash-2.05$ ls -tlra persistData1/
+total 8
+-rw-rw-r--   1 person users        188 Mar  4 06:17 BACKUPpersistDS1.if
+drwxrwxr-x   2 person users        512 Mar  4 06:17 .
+-rw-rw-r--   1 person users          0 Mar  4 06:18 BACKUPpersistDS1_1.drf
+-rw-rw-r--   1 person users         38 Mar  4 06:18 BACKUPpersistDS1_1.crf
+drwxrwxr-x   8 person users        512 Mar  4 06:20 ..
+bash-2.05$
+ 
+bash-2.05$ ls -ltra overflowData1/
+total 1028
+drwxrwxr-x   8 person users        512 Mar  4 06:20 ..
+-rw-rw-r--   1 person users          0 Mar  4 06:21 DRLK_IFoverflowDS1.lk
+-rw-rw-r--   1 person users          0 Mar  4 06:21 BACKUPoverflowDS1.if
+-rw-rw-r--   1 person users 1073741824 Mar  4 06:21 OVERFLOWoverflowDS1_1.crf
+drwxrwxr-x   2 person users        512 Mar  4 06:21 .
+```
+
+Example default disk store files for a persistent region:
+
+``` pre
+bash-2.05$ ls -tlra
+total 106
+drwxrwxr-x   8 person users       1024 Mar  8 14:51 ..
+-rw-rw-r--   1 person users       1010 Mar  8 15:01 defTest.xml
+drwxrwxr-x   2 person users        512 Mar  8 15:01 backupDirectory
+-rw-rw-r--   1 person users          0 Mar  8 15:01 DRLK_IFDEFAULT.lk
+-rw-rw-r--   1 person users  107374183 Mar  8 15:01 BACKUPDEFAULT_1.drf
+-rw-rw-r--   1 person users  966367641 Mar  8 15:01 BACKUPDEFAULT_1.crf
+-rw-rw-r--   1 person users        172 Mar  8 15:01 BACKUPDEFAULT.if
+drwxrwxr-x   3 person users        512 Mar  8 15:01 .           
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
new file mode 100644
index 0000000..c68441f
--- /dev/null
+++ b/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
@@ -0,0 +1,55 @@
+---
+title:  Handling Missing Disk Stores
+---
+
+<a 
id="handling_missing_disk_stores__section_9345819FC27E41FB94F5E54979B7C506"></a>
+This section applies to disk stores that hold the latest copy of your data for 
at least one region.
+
+## <a 
id="handling_missing_disk_stores__section_9E8FBB7935F34239AD5E65A3E857EEAA" 
class="no-quick-link"></a>Show Missing Disk Stores
+
+Using `gfsh`, the `show missing-disk-stores` command lists all disk stores that hold the most recent data and are being waited on by other members.
+
+For replicated regions, this command only lists missing members that are 
preventing other members from starting up. For partitioned regions, this 
command also lists any offline data stores, even when other data stores for the 
region are online, because their offline status may be causing 
`PartitionOfflineExceptions` in cache operations or preventing the system from 
satisfying redundancy.
+
+Example:
+
+``` pre
+gfsh>show missing-disk-stores
+          Disk Store ID              |   Host    |               Directory
+------------------------------------ | --------- | -------------------------------------
+60399215-532b-406f-b81f-9b5bd8d1b55a | excalibur | /usr/local/gemfire/deploy/disk_store1
+```
+
+**Note:**
+You must be connected to a JMX Manager in `gfsh` to run this command.
+
+**Note:**
+The disk store directories listed for missing disk stores may not be the 
directories you have currently configured for the member. The list is retrieved 
from the other running members—the ones who are reporting the missing member. 
They have information from the last time the missing disk store was online. If 
you move your files and change the member’s configuration, these directory 
locations will be stale.
+
+Disk stores usually go missing because their member fails to start. The member 
can fail to start for a number of reasons, including:
+
+-   Disk store file corruption. You can check on this by validating the disk 
store.
+-   Incorrect distributed system configuration for the member
+-   Network partitioning
+-   Drive failure
+
+## <a 
id="handling_missing_disk_stores__section_FDF161F935054AB190D9DB0D7930CEAA" 
class="no-quick-link"></a>Revoke Missing Disk Stores
+
+This section applies to disk stores for which both of the following are true:
+
+-   Disk stores that have the most recent copy of data for one or more regions 
or region buckets.
+-   Disk stores that are unrecoverable, such as when you have deleted them, or 
their files are corrupted or on a disk that has had a catastrophic failure.
+
+When you cannot bring the latest persisted copy online, use the revoke command 
to tell the other members to stop waiting for it. Once the store is revoked, 
the system finds the remaining most recent copy of data and uses that.
+
+**Note:**
+Once revoked, a disk store cannot be reintroduced into the system.
+
+Use the gfsh `show missing-disk-stores` command to properly identify the disk store you need to revoke. The revoke command takes the disk store ID as input, as listed by that command.
+
+Example:
+
+``` pre
+gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a
+Missing disk store successfully revoked
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb 
b/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
new file mode 100644
index 0000000..902a310
--- /dev/null
+++ b/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
@@ -0,0 +1,43 @@
+---
+title:  How Disk Stores Work
+---
+
+Overflow and persistence use disk stores individually or together to store 
data.
+
+<a id="how_disk_stores_work__section_1A93EFBE3E514918833592C17CFC4C40"></a>
+Disk storage is available for these items:
+
+-   **Regions**. Persist and/or overflow data from regions.
+-   **Server’s client subscription queues**. Overflow the messaging queues 
to control memory use.
+-   **Gateway sender queues**. Persist these for high availability. These 
queues always overflow.
+-   **PDX serialization metadata**. Persist metadata about objects you 
serialize using Geode PDX serialization.
+
+Each member has its own set of disk stores, and they are completely separate 
from the disk stores of any other member. For each disk store, define where and 
how the data is stored to disk. You can store data from multiple regions and 
queues in a single disk store.
+
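+For example, a single named disk store can back two different persistent regions on the same member. The following `gfsh` session is only a sketch; the store name, directory, and region names are placeholders rather than values taken from this documentation:
+
+``` pre
+gfsh>create disk-store --name=memberStoreA --dir=/data/memberStoreA
+gfsh>create region --name=customers --type=REPLICATE_PERSISTENT --disk-store=memberStoreA
+gfsh>create region --name=orders --type=PARTITION_PERSISTENT --disk-store=memberStoreA
+```
+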
+This figure shows a member with disk stores D through R defined. The member 
has two persistent regions using disk store D and an overflow region and an 
overflow queue using disk store R.
+
+<img src="../../images/diskStores-1.gif" 
id="how_disk_stores_work__image_CB7972998C4A40B2A02550B97A723536" class="image" 
/>
+
+## <a id="how_disk_stores_work__section_433EEEA1560D40DD9842200181EB1D0A" 
class="no-quick-link"></a>What Geode Writes to the Disk Store
+
+This list describes the items that Geode writes to the disk store:
+
+-   The members that host the store, and information on their status, such as which members are online, which members are offline, and time stamps.
+-   A disk store identifier.
+-   Which regions are in the disk store, specified by region name.
+-   Colocated regions that the regions in the disk store are dependent upon.
+-   A set of files that specify all keys for the regions, as well as all 
operations on the regions. Given both keys and operations, a region can be 
recreated when a member is restarted.
+
+Geode does not write indexes to disk.
+
+## <a id="how_disk_stores_work__section_C1A047CD5518499D94A0E9A0328F6DB8" 
class="no-quick-link"></a>Disk Store State
+
+The files for a disk store are used by Geode as a group. Treat them as a 
single entity. If you copy them, copy them all together. Do not change the file 
names.
+
+Disk store access and management differs according to whether the member is 
online or offline.
+
+While a member is running, its disk stores are online. When the member exits 
and is not running, its disk stores are offline.
+
+-   Online, a disk store is owned and managed by its member process. To run 
operations on an online disk store, use API calls in the member process, or use 
the `gfsh` command-line interface.
+-   Offline, the disk store is just a collection of files in the host file system. The files are accessible based on file system permissions. You can copy the files for backup or to move the member’s disk store location. You can also run some maintenance operations, such as file compaction and validation, by using the `gfsh` command-line interface. When offline, the disk store's information is unavailable to the distributed system. For partitioned regions, region data is split between multiple members, and therefore the startup of a member depends on, and must wait for, all members to be online. An attempt to access an entry that is stored on disk by an offline member results in a `PartitionOfflineException`.
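+
+For example, with the member stopped, offline maintenance on its disk store might look like the following sketch; the store name and directory are placeholders:
+
+``` pre
+gfsh>validate offline-disk-store --name=memberStoreA --disk-dirs=/data/memberStoreA
+gfsh>compact offline-disk-store --name=memberStoreA --disk-dirs=/data/memberStoreA
+```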
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
 
b/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
new file mode 100644
index 0000000..279284c
--- /dev/null
+++ 
b/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
@@ -0,0 +1,48 @@
+---
+title:  Keeping a Disk Store Synchronized with the Cache
+---
+
+<a 
id="syncing_offline_disk_store__section_7D01550D750E48289EFBA9BBDB5A334E"></a>
+You can take several actions to optimize disk store use and data loading at 
startup.
+
+## <a 
id="syncing_offline_disk_store__section_7B95B20F07BD40699CDB7F3D6A93B905" 
class="no-quick-link"></a>Change Region Configuration
+
+When your disk store is offline, you can keep the configuration for its 
regions up-to-date with your `cache.xml` and API settings. The disk store 
retains region capacity and load settings, including entry map settings 
(initial capacity, concurrency level, load factor), LRU eviction settings, and 
the statistics enabled boolean. If the configurations do not match at startup, 
the `cache.xml` and API override any disk store settings and the disk store is 
automatically updated to match. So you do not need to modify your disk store to 
keep your cache configuration and disk store synchronized, but you will save 
startup time and memory if you do.
+
+For example, to change the initial capacity of the disk store:
+
+``` pre
+gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region 
+--disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir 
+--initialCapacity=20
+```
+
+To list all modifiable settings and their current values for a region, run the 
command with no actions specified:
+
+``` pre
+gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region
+--disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir  
+```
+
+## <a 
id="syncing_offline_disk_store__section_0CA17ED106394686A1A5B30601758DA6" 
class="no-quick-link"></a>Take a Region Out of Your Cache Configuration and 
Disk Store
+
+You might remove a region from your application if you decide to rename it or 
to split its data into two entirely different regions. Any significant data 
restructuring can cause you to retire some data regions.
+
+This applies to the removal of regions while the disk store is offline. 
Regions you destroy through API calls or by `gfsh` are automatically removed 
from the disk store of online members.
+
+In your application development, when you discontinue use of a persistent 
region, remove the region from the member’s disk store as well.
+
+**Note:**
+Perform the following operations with caution. You are permanently removing 
data.
+
+You can remove the region from the disk store in one of two ways:
+
+-   Delete the entire set of disk store files. Your member will initialize with an empty set of files the next time you start it. Exercise caution when removing the files from the file system, as more than one region can be specified to use the same disk store directories.
+-   Selectively remove the discontinued region from the disk store with a 
command such as:
+
+    ``` pre
+    gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region
+    --disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir --remove
+    ```
+
+To guard against unintended data loss, Geode maintains the region in the disk 
store until you manually remove it. Regions in the disk stores that are not 
associated with any region in your application are still loaded into temporary 
regions in memory and kept there for the life of the member. The system has no 
way of detecting whether the cache region will be created by your API at some 
point, so it keeps the temporary region loaded and available.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
new file mode 100644
index 0000000..872cb5a
--- /dev/null
+++ b/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
@@ -0,0 +1,27 @@
+---
+title:  Altering When Buffers Are Flushed to Disk
+---
+
+You can configure Geode to write immediately to disk and you may be able to 
modify your operating system behavior to perform buffer flushes more frequently.
+
+Typically, Geode writes disk data into the operating system's disk buffers and 
the operating system periodically flushes the buffers to disk. Increasing the 
frequency of writes to disk decreases the likelihood of data loss from 
application or machine crashes, but it impacts performance. Your other option, 
which may give you better performance, is to use Geode's in-memory data 
backups. Do this by storing your data in multiple replicated regions or in 
partitioned regions that are configured with redundant copies. See [Region 
Types](../../developing/region_options/region_types.html#region_types).
+
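+As a sketch of that in-memory alternative, the region shortcuts below configure replication and redundant partition copies without involving disk; the region names are placeholders:
+
+``` pre
+gfsh>create region --name=customers --type=REPLICATE
+gfsh>create region --name=orders --type=PARTITION_REDUNDANT
+```
+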
+## <a id="disk_buffer_flushes__section_448348BD28B14F478D81CC2EDC6C7049" 
class="no-quick-link"></a>Modifying Disk Flushes for the Operating System
+
+You may be able to change the operating system settings for periodic flushes. You may also be able to perform explicit disk flushes from your application code. For information on these options, see your operating system's documentation. For example, in Linux you can change the disk flush interval by modifying the setting `/proc/sys/vm/dirty_expire_centisecs`. It defaults to 30 seconds. To alter this setting, see the Linux documentation for `dirty_expire_centisecs`.
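+
+For example, on a typical Linux system you can inspect and tighten the flush interval with `sysctl`; the value shown here (1000 centiseconds, that is, 10 seconds) is only an illustration:
+
+``` pre
+# read the current expiration interval, in centiseconds
+sysctl vm.dirty_expire_centisecs
+# flush dirty buffers more aggressively, for example every 10 seconds
+sudo sysctl -w vm.dirty_expire_centisecs=1000
+```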
+
+## <a id="disk_buffer_flushes__section_D1068505581A43EE8395DBE97297C60F" 
class="no-quick-link"></a>Modifying Geode to Flush Buffers on Disk Writes
+
+You can have Geode flush the disk buffers on every disk write. Do this by 
setting the system property `gemfire.syncWrites` to true at the command line 
when you start your Geode member. You can only modify this setting when you 
start a member. When this is set, Geode uses a Java `RandomAccessFile` with the 
flags "rwd", which causes every file update to be written synchronously to the 
storage device. This only guarantees your data if your disk stores are on a 
local device. See the Java documentation for `java.io.RandomAccessFile`.
+
+To modify the setting for a Geode application, add this to the java command 
line when you start the member:
+
+``` pre
+-Dgemfire.syncWrites=true
+```
+
+To modify the setting for a cache server, use this syntax:
+
+``` pre
+gfsh>start server --name=... --J=-Dgemfire.syncWrites=true
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
new file mode 100644
index 0000000..cda4981
--- /dev/null
+++ b/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
@@ -0,0 +1,25 @@
+---
+title:  Disk Store Management
+---
+
+The `gfsh` command-line tool has a number of options for examining and 
managing your disk stores. The `gfsh` tool, the `cache.xml` file and the 
DiskStore APIs are your management tools for online and offline disk stores.
+
+See [Disk Store 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA)
 for a list of available commands.
+
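+For example, to list the disk stores in a running, connected system and then inspect one of them, you might run commands like these; the member and store names are placeholders:
+
+``` pre
+gfsh>list disk-stores
+gfsh>describe disk-store --member=server1 --name=serverOverflow
+```
+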
+-   **[Disk Store Management Commands and 
Operations](../../managing/disk_storage/managing_disk_stores_cmds.html)**
+
+-   **[Validating a Disk 
Store](../../managing/disk_storage/validating_disk_store.html)**
+
+-   **[Running Compaction on Disk Store Log 
Files](../../managing/disk_storage/compacting_disk_stores.html)**
+
+-   **[Keeping a Disk Store Synchronized with the 
Cache](../../managing/disk_storage/keeping_offline_disk_store_in_sync.html)**
+
+-   **[Configuring Disk Free Space 
Monitoring](../../managing/disk_storage/disk_free_space_monitoring.html)**
+
+-   **[Handling Missing Disk 
Stores](../../managing/disk_storage/handling_missing_disk_stores.html)**
+
+-   **[Altering When Buffers Are Flushed to 
Disk](../../managing/disk_storage/managing_disk_buffer_flushes.html)**
+
+    You can configure Geode to write immediately to disk and you may be able 
to modify your operating system behavior to perform buffer flushes more 
frequently.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
new file mode 100644
index 0000000..3d7ca92
--- /dev/null
+++ b/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
@@ -0,0 +1,45 @@
+---
+title:  Disk Store Management Commands and Operations
+---
+
+<a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_4AFD4B9EECDA448BA5235FB1C32A48F1"></a>
+You can manage your disk stores using the gfsh command-line tool. For more 
information on `gfsh` commands, see [gfsh (Geode 
SHell)](../../tools_modules/gfsh/chapter_overview.html) and [Disk Store 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
+
+**Note:**
+Each of these commands operates either on the online disk stores or offline 
disk stores, but not both.
+
+| gfsh Command                  | Online or Offline Command | See . . .                                                                                                                    |
+|-------------------------------|---------------------------|------------------------------------------------------------------------------------------------------------------------------|
+| `alter disk-store`            | Off                       | [Keeping a Disk Store Synchronized with the Cache](keeping_offline_disk_store_in_sync.html#syncing_offline_disk_store)      |
+| `compact disk-store`          | On                        | [Running Compaction on Disk Store Log Files](compacting_disk_stores.html#compacting_disk_stores)                            |
+| `backup disk-store`           | On                        | [Creating Backups for System Recovery and Operational Management](backup_restore_disk_store.html#backup_restore_disk_store) |
+| `compact offline-disk-store`  | Off                       | [Running Compaction on Disk Store Log Files](compacting_disk_stores.html#compacting_disk_stores)                            |
+| `export offline-disk-store`   | Off                       | [Creating Backups for System Recovery and Operational Management](backup_restore_disk_store.html#backup_restore_disk_store) |
+| `revoke missing-disk-store`   | On                        | [Handling Missing Disk Stores](handling_missing_disk_stores.html#handling_missing_disk_stores)                              |
+| `show missing-disk-stores`    | On                        | [Handling Missing Disk Stores](handling_missing_disk_stores.html#handling_missing_disk_stores)                              |
+| `shutdown`                    | On                        | [Start Up and Shut Down with Disk Stores](starting_system_with_disk_stores.html)                                            |
+| `validate offline disk-store` | Off                       | [Validating a Disk Store](validating_disk_store.html#validating_disk_store)                                                 |
+
+For complete command syntax of any gfsh command, run `help <command>` at the gfsh command line.
+
+## <a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_885D2FD6C4D94664BE1DEE032153B819"
 class="no-quick-link"></a>Online Disk Store Operations
+
+For online operations, `gfsh` must be connected to a distributed system through a JMX manager; it sends the operation requests to the members that have disk stores. These commands will not run on offline disk stores.
+
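+For example, an online session might connect through a locator's JMX manager and then compact a store on the members that host it; the locator address and store name are placeholders:
+
+``` pre
+gfsh>connect --locator=localhost[10334]
+gfsh>compact disk-store --name=serverOverflow
+```
+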
+## <a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_5B001E58091D4CC1B83CFF9B895C7DA2"
 class="no-quick-link"></a>Offline Disk Store Operations
+
+For offline operations, `gfsh` runs the command against the specified disk 
store and its specified directories. You must specify all directories for the 
disk store. For example:
+
+``` pre
+gfsh>compact offline-disk-store --name=mydiskstore --disk-dirs=MyDirs 
+```
+
+Offline operations will not run on online disk stores. The tool locks the disk 
store while it is running, so the member cannot start in the middle of an 
operation.
+
+If you try to run an offline command for an online disk store, you get a 
message like this:
+
+``` pre
+gfsh>compact offline-disk-store --name=DEFAULT --disk-dirs=s1
+This disk store is in use by another process. "compact disk-store" can 
+be used to compact a disk store that is currently in use.
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/operation_logs.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/operation_logs.html.md.erb 
b/geode-docs/managing/disk_storage/operation_logs.html.md.erb
new file mode 100644
index 0000000..f9ca4f8
--- /dev/null
+++ b/geode-docs/managing/disk_storage/operation_logs.html.md.erb
@@ -0,0 +1,52 @@
+---
+title:  Disk Store Operation Logs
+---
+
+At creation, each operation log is initialized at the disk store's 
`max-oplog-size`, with the size divided between the `crf` and `drf` files. When 
the oplog is closed, Apache Geode shrinks the files to the space used in each 
file.
+
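+For example, you can set the oplog size when you create the store. This sketch uses a hypothetical store name and directory with a 512 MB maximum oplog size:
+
+``` pre
+gfsh>create disk-store --name=logStore --dir=/data/logStore --max-oplog-size=512
+```
+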
+<a id="operation_logs__section_C0B1391492394A908577C29772902A42"></a>
+After the oplog is closed, Geode also attempts to create a `krf` file, which 
contains the key names as well as the offset for the value within the `crf` 
file. Although this file is not required for startup, if it is available, it 
will improve startup performance by allowing Geode to load the entry values in 
the background after the entry keys are loaded.
+
+When an operation log is full, Geode automatically closes it and creates a new 
log with the next sequence number. This is called *oplog rolling*. You can also 
request an oplog rolling through the API call `DiskStore.forceRoll`. You may 
want to do this immediately before compacting your disk stores, so the latest 
oplog is available for compaction.
+
+**Note:**
+Log compaction can change the names of the disk store files. File number 
sequencing is usually altered, with some existing logs removed or replaced by 
newer logs with higher numbering. Geode always starts a new log at a number 
higher than any existing number.
+
+This example listing shows the logs in a system with only one disk directory 
specified for the store. The first log (`BACKUPCacheOverflow_1.crf` and 
`BACKUPCacheOverflow_1.drf`) has been closed and the system is writing to the 
second log.
+
+``` pre
+bash-2.05$ ls -tlra 
+total 55180
+drwxrwxr-x   7 person users        512 Mar 22 13:56 ..
+-rw-rw-r--   1 person users          0 Mar 22 13:57 BACKUPCacheOverflow_2.drf
+-rw-rw-r--   1 person users     426549 Mar 22 13:57 BACKUPCacheOverflow_2.crf
+-rw-rw-r--   1 person users          0 Mar 22 13:57 BACKUPCacheOverflow_1.drf
+-rw-rw-r--   1 person users     936558 Mar 22 13:57 BACKUPCacheOverflow_1.crf
+-rw-rw-r--   1 person users       1924 Mar 22 13:57 BACKUPCacheOverflow.if
+drwxrwxr-x   2 person users       2560 Mar 22 13:57 .
+```
+
+The system rotates through all available disk directories to write its logs. 
The next log is always started in a directory that has not reached its 
configured capacity, if one exists.
+
+## <a id="operation_logs__section_8431984F4E6644D79292850CCA60E6E3" 
class="no-quick-link"></a>When Disk Store Oplogs Reach the Configured Disk 
Capacity
+
+If no directory exists that is within its capacity limits, how Geode handles 
this depends on whether automatic compaction is enabled.
+
+-   If auto-compaction is enabled, Geode creates a new oplog in one of the 
directories, going over the limit, and logs a warning that reports:
+
+    ``` pre
+    Even though the configured directory size limit has been exceeded a 
+    new oplog will be created. The current limit is of XXX. The current 
+    space used in the directory is YYY.
+    ```
+
+    **Note:**
+    When auto-compaction is enabled, `dir-size` does not limit how much disk 
space is used. Geode will perform auto-compaction, which should free space, but 
the system may go over the configured disk limits.
+
+-   If auto-compaction is disabled, Geode does not create a new oplog, 
operations in the regions attached to the disk store block, and Geode logs this 
error:
+
+    ``` pre
+    Disk is full and rolling is disabled. No space can be created
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
 
b/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
new file mode 100644
index 0000000..5a0b60e
--- /dev/null
+++ 
b/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Optimizing a System with Disk Stores
+---
+
+Optimize availability and performance by following the guidelines in this 
section.
+
+1.  Apache Geode recommends the use of `ext4` filesystems when operating on Linux or Solaris platforms. The `ext4` filesystem supports preallocation, which benefits disk startup performance. If you are using `ext3` filesystems in latency-sensitive environments with high write throughput, you can improve disk startup performance by setting the `maxOplogSize` (see `DiskStoreFactory.setMaxOplogSize`) to a value lower than the default 1 GB and by disabling preallocation by specifying the system property `gemfire.preAllocateDisk=false` upon Geode process startup.
+2.  When you start your system, start all the members that have persistent 
regions at roughly the same time. Create and use startup scripts for 
consistency and completeness.
+3.  Shut down your system using the gfsh `shutdown` command. This is an 
ordered shutdown that positions your disk stores for a faster startup.
+4.  Configure critical usage thresholds (`disk-usage-warning-percentage` and `disk-usage-critical-percentage`) for the disk. By default, these are set to 90% for warning and 99% for errors that will shut down the cache.
+5.  Decide on a file compaction policy and, if needed, develop procedures to 
monitor your files and execute regular compaction.
+6.  Decide on a backup strategy for your disk stores and follow it. You can back up a running system by using the `backup disk-store` command (see the example following this list).
+7.  If you remove any persistent region or change its configuration while your 
disk store is offline, consider synchronizing the regions in your disk stores.
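+
+As an example for the backup guideline above, an online backup of all members' disk stores can be written to a target directory with a single `gfsh` command; the path is a placeholder:
+
+``` pre
+gfsh>backup disk-store --dir=/backups/disk_store_backup
+```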
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
new file mode 100644
index 0000000..a142ee6
--- /dev/null
+++ b/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
@@ -0,0 +1,19 @@
+---
+title:  Configuring Disk Stores
+---
+
+In addition to the disk stores you specify, Apache Geode has a default disk 
store that it uses when disk use is configured with no disk store name 
specified. You can modify default disk store behavior.
+
+-   **[Designing and Configuring Disk 
Stores](../../managing/disk_storage/using_disk_stores.html)**
+
+    You define disk stores in your cache, then you assign them to your regions 
and queues by setting the `disk-store-name` attribute in your region and queue 
configurations.
+
+-   **[Disk Store Configuration 
Parameters](../../managing/disk_storage/disk_store_configuration_params.html)**
+
+    You define your disk stores by using the `gfsh create disk-store` command 
or in `<disk-store>` subelements of your cache declaration in `cache.xml`. All 
disk stores are available for use by all of your regions and queues.
+
+-   **[Modifying the Default Disk 
Store](../../managing/disk_storage/using_the_default_disk_store.html)**
+
+    You can modify the behavior of the default disk store by specifying the 
attributes you want for the disk store named "DEFAULT".
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
new file mode 100644
index 0000000..25fe244
--- /dev/null
+++ 
b/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
@@ -0,0 +1,111 @@
+---
+title:  Start Up and Shut Down with Disk Stores
+---
+
+This section describes what happens during startup and shutdown and provides 
procedures for those operations.
+
+## Start Up
+
+When you start a member with a persistent region, the data is retrieved from 
disk stores to recreate the member’s persistent region. If the member does 
not hold all of the most recent data for the region, then other members have 
the data, and region creation blocks, waiting for those other members. A 
partitioned region with colocated entries also blocks on start up, waiting for 
the entries of the colocated region to be available. A persistent gateway 
sender is treated the same as a colocated region, so it can also block region 
creation.
+
+With a log level of info or below, the system provides messaging about the 
wait. Here, the disk store for server2 has the most recent data for the region, 
and server1 is waiting for server2.
+
+``` pre
+Region /people has potentially stale data.
+It is waiting for another member to recover the latest data.
+My persistent id:
+
+  DiskStore ID: 6893751ee74d4fbd-b4780d844e6d5ce7
+  Name: server1
+  Location: /192.0.2.0:/home/dsmith/server1/.
+
+Members with potentially new data:
+[
+  DiskStore ID: 160d415538c44ab0-9f7d97bae0a2f8de
+  Name: server2
+  Location: /192.0.2.0:/home/dsmith/server2/.
+]
+Use the "gfsh show missing-disk-stores" command to see all disk stores
+that are being waited on by other members.
+```
+
+When the most recent data is available, the system updates the region, logs a 
message, and continues the startup.
+
+``` pre
+[info 2010/04/09 10:52:13.010 PDT CacheRunner <main> tid=0x1]    
+   Done waiting for the remote data to be available.
+```
+
+If the member's disk store has data for a region that is never created, the 
data remains in the disk store.
+
+Each member’s persistent regions load and go online as quickly as possible, 
not waiting unnecessarily for other members to complete. For performance 
reasons, these actions occur asynchronously:
+
+-   Once at least one copy of each and every bucket is recovered from disk, 
the region is available. Secondary buckets will load asynchronously.
+-   Entry keys are loaded from the key file in the disk store before 
considering entry values. Once all keys are loaded, Geode loads the entry 
values asynchronously. If a value is requested before it has loaded, the value 
will immediately be fetched from the disk store.
+
+## <a 
id="starting_system_with_disk_stores__section_D0A7403707B847749A22BF9221A2C823" 
class="no-quick-link"></a>Start Up Procedure
+
+To start a system with disk stores:
+
+1.  **Start all members with persisted data first and at the same time**. 
Exactly how you do this depends on your members. Make sure to start members 
that host colocated regions, as well as persistent gateway senders.
+
+    While they are initializing their regions, the members determine which 
have the most recent region data, and initialize their regions with the most 
recent data.
+
+    For replicated regions, where you define persistence only in some of the 
region's host members, start the persistent replicate members prior to the 
non-persistent replicate members to make sure the data is recovered from disk.
+
+    This is an example bash script for starting members in parallel. The 
script waits for the startup to finish. It exits with an error status if one of 
the jobs fails.
+
+    ``` pre
+    #!/bin/bash
+    ssh servera "cd /my/directory; gfsh start server --name=servera" &
+    ssh serverb "cd /my/directory; gfsh start server --name=serverb" &
+
+    STATUS=0;
+    for job in `jobs -p`
+    do
+    echo $job
+    wait $job;
+    JOB_STATUS=$?;
+    test $STATUS -eq 0 && STATUS=$JOB_STATUS;
+    done
+    exit $STATUS;
+    ```
+
+2.  **Respond to blocked members**. When a member blocks waiting for more 
recent data from another member, the member waits indefinitely rather than 
coming online with stale data. Check for missing disk stores with the `gfsh show missing-disk-stores` command. See [Handling Missing Disk Stores](handling_missing_disk_stores.html#handling_missing_disk_stores).
+    -   If no disk stores are missing, the cache initialization must be slow 
for some other reason. Check the information on member hangs in [Diagnosing 
System 
Problems](../troubleshooting/diagnosing_system_probs.html#diagnosing_system_probs).
+    -   If disk stores are missing that you think should be there:
+        -   Make sure you have started the member. Check the logs for any 
failure messages. See 
[Logging](../logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
+        -   Make sure your disk store files are accessible. If you have moved 
your member or disk store files, you must update your disk store configuration 
to match.
+    -   If disk stores are missing that you know are lost, because you have 
deleted them or their files are otherwise unavailable, revoke them so the 
startup can continue.
+
+## <a 
id="starting_system_with_disk_stores__section_5E32F488EB5D4E74AAB6BF394E4329D6" 
class="no-quick-link"></a>Example Startup to Illustrate Ordering
+
+The following lists the two possibilities for starting up a replicated 
persistent region after a shutdown. Assume that Member A (MA) exits first, 
leaving persisted data on disk for RegionP. Member B (MB) continues to run 
operations on RegionP, which update its disk store and leave the disk store for 
MA in a stale condition. MB exits, leaving the most up-to-date data on disk for 
RegionP.
+
+-   Restart order 1
+    1.  MB is started first. MB identifies that it has the most recent disk 
data for RegionP and initializes the region from disk. MB does not block.
+    2.  MA is started, recovers its data from disk, and updates region data as 
needed from the data in MB.
+-   Restart order 2
+    1.  MA is started first. MA identifies that it does not have the most 
recent disk data and blocks, waiting for MB to start before recreating RegionP 
in MA.
+    2.  MB is started. MB identifies that it has the most recent disk data for 
RegionP and initializes the region from disk.
+    3.  MA recovers its RegionP data from disk and updates region data as 
needed from the data in MB.
+
+## Shutdown
+
+If more than one member hosts a persistent region or queue, the order in which 
the various members shut down may be significant upon restart of the system. 
The last member to exit the system or shut down has the most up-to-date data on 
disk. Each member knows which other system members were online at the time of 
exit or shutdown. This permits a member to acquire the most recent data upon 
subsequent start up.
+
+For a replicated region with persistence, the last member to exit has the most 
recent data.
+
+For a partitioned region every member persists its own buckets. A shutdown 
using `gfsh shutdown` will synchronize the disk stores before exiting, so all 
disk stores hold the most recent data. Without an orderly shutdown, some disk 
stores may have more recent bucket data than others.
+
+The best way to shut down a system is to invoke the `gfsh shutdown` command 
with all members running. All online data stores will be synchronized before 
shutting down, so all hold the most recent data copy. To shut down all members 
other than locators:
+
+``` pre
+gfsh>shutdown
+```
+
+To shut down all members, including locators:
+
+``` pre
+gfsh>shutdown --include-locators=true
+```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
new file mode 100644
index 0000000..a7144b9
--- /dev/null
+++ b/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
@@ -0,0 +1,199 @@
+---
+title:  Designing and Configuring Disk Stores
+---
+
+You define disk stores in your cache, then you assign them to your regions and 
queues by setting the `disk-store-name` attribute in your region and queue 
configurations.
+
+**Note:**
+Besides the disk stores you specify, Apache Geode has a default disk store 
that it uses when disk use is configured with no disk store name specified. By 
default, this disk store is saved to the application’s working directory. You 
can change its behavior, as indicated in [Create and Configure Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884)
 and [Modifying the Default Disk 
Store](using_the_default_disk_store.html#using_the_default_disk_store).
+
+-   [Design Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_0CD724A12EE4418587046AAD9EEC59C5)
+-   [Create and Configure Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884)
+-   [Configuring Regions, Queues, and PDX Serialization to Use the Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_AFB254CA9C5A494A8E335352A6849C16)
+-   [Configuring Disk Stores on Gateway 
Senders](using_disk_stores.html#defining_disk_stores__config-disk-store-gateway)
+
+## <a id="defining_disk_stores__section_0CD724A12EE4418587046AAD9EEC59C5" 
class="no-quick-link"></a>Design Your Disk Stores
+
+Before you begin, you should understand Geode [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
+
+1.  Work with your system designers and developers to plan for anticipated 
disk storage requirements in your testing and production caching systems. Take 
into account space and functional requirements.
+    -   For efficiency, store data that is only overflowed in separate disk stores from data that is persisted or persisted and overflowed. Regions can be overflowed, persisted, or both. Server subscription queues are only overflowed.
+    -   When calculating your disk requirements, figure in your data 
modification patterns and your compaction strategy. Geode creates each oplog 
file at the max-oplog-size, which defaults to 1 GB. Obsolete operations are 
only removed from the oplogs during compaction, so you need enough space to 
store all operations that are done between compactions. For regions where you 
are doing a mix of updates and deletes, if you use automatic compaction, a good 
upper bound for the required disk space is
+
+        ``` pre
+        (1 / (1 - (compaction_threshold/100)) ) * data size
+        ```
+
+        where data size is the total size of all the data you store in the 
disk store. So, for the default compaction-threshold of 50, the disk space is 
roughly twice your data size. Note that the compaction thread could lag behind 
other operations, causing disk use to rise above the threshold temporarily. If 
you disable automatic compaction, the amount of disk required depends on how 
many obsolete operations accumulate between manual compactions.
+
+2.  Work with your host system administrators to determine where to place your 
disk store directories, based on your anticipated disk storage requirements and 
the available disks on your host systems.
+    -   Make sure the new storage does not interfere with other processes that 
use disk on your systems. If possible, store your files to disks that are not 
used by other processes, including virtual memory or swap space. If you have 
multiple disks available, for the best performance, place one directory on each 
disk.
+    -   Use different directories for different members. You can use any 
number of directories for a single disk store.
+
+## <a id="defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884" 
class="no-quick-link"></a>Create and Configure Your Disk Stores
+
+1.  In the locations you have chosen, create all directories you will specify 
for your disk stores to use. Geode throws an exception if the specified 
directories are not available when a disk store is created. You do not need to 
populate these directories with anything.
+2.  Open a `gfsh` prompt and connect to the distributed system.
+3.  At the `gfsh` prompt, create and configure a disk store:
+    -  Specify the name (`--name`) of the disk-store.
+
+        -   Choose disk store names that reflect how the stores should be used 
and that work for your operating systems. Disk store names are used in the disk 
file names:
+
+            -   Use disk store names that satisfy the file naming requirements for your operating system. For example, if you store your data to disk in a Windows system, your disk store names cannot contain any of these reserved characters: &lt; &gt; : " / \\ | ? \*.
+
+            -   Do not use very long disk store names. The full file names 
must fit within your operating system limits. On Linux, for example, the 
standard limitation is 255 characters.
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480
+        ```
+    -  Configure the directory locations (`--dir`) and the maximum space to 
use for the store (specified after the disk directory name by \# and the 
maximum number in megabytes).
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480
+        ```
+    -  Optionally, you can configure the store’s file compaction behavior. 
In conjunction with this, plan and program for any manual compaction.  Example:
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
+        --compaction-threshold=40 --auto-compact=false --allow-force-compaction=true
+        ```
+    -  If needed, configure the maximum size (in MB) of a single oplog. When 
the current files reach this size, the system rolls forward to a new file. You 
get better performance with relatively small maximum file sizes.  Example:
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
+        --compaction-threshold=40 --auto-compact=false --allow-force-compaction=true \
+        --max-oplog-size=512
+        ```
+    -  If needed, modify queue management parameters for asynchronous queueing 
to the disk store. You can configure any region for synchronous or asynchronous 
queueing (region attribute `disk-synchronous`). Server queues and gateway 
sender queues always operate synchronously. When either the `queue-size` 
(number of operations) or `time-interval` (milliseconds) is reached, enqueued 
data is flushed to disk. You can also synchronously flush unwritten data to 
disk through the `DiskStore` `flushToDisk` method.  Example:
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
+        --compaction-threshold=40 --auto-compact=false --allow-force-compaction=true \
+        --max-oplog-size=512 --queue-size=10000 --time-interval=15
+        ```
+    -  If needed, modify the size (specified in bytes) of the buffer used for 
writing to disk.  Example:
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
+        --compaction-threshold=40 --auto-compact=false --allow-force-compaction=true \
+        --max-oplog-size=512 --queue-size=10000 --time-interval=15 --write-buffer-size=65536
+        ```
+    -  If needed, modify the `disk-usage-warning-percentage` and 
`disk-usage-critical-percentage` thresholds that determine the percentage 
(default: 90%) of disk usage that will trigger a warning and the percentage 
(default: 99%) of disk usage that will generate an error and shut down the 
member cache.  Example:
+
+        ``` pre
+        gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
+        --compaction-threshold=40 --auto-compact=false --allow-force-compaction=true \
+        --max-oplog-size=512 --queue-size=10000 --time-interval=15 --write-buffer-size=65536 \
+        --disk-usage-warning-percentage=80 --disk-usage-critical-percentage=98
+        ```
+
+The following is the complete disk store cache.xml configuration example:
+
+``` pre
+<disk-store name="serverOverflow" compaction-threshold="40" 
+           auto-compact="false" allow-force-compaction="true"
+        max-oplog-size="512" queue-size="10000"  
+        time-interval="15" write-buffer-size="65536"
+        disk-usage-warning-percentage="80"
+        disk-usage-critical-percentage="98">
+       <disk-dirs>
+              <disk-dir>c:\overflow_data</disk-dir>
+              <disk-dir dir-size="20480">d:\overflow_data</disk-dir>
+       </disk-dirs>
+</disk-store>
+```
+
+**Note:**
+As an alternative to defining cache.xml on every server in the cluster, if you have the cluster configuration service enabled, you can create a disk store in `gfsh` and share its configuration with the rest of the cluster. See [Overview of the Cluster Configuration Service](../../configuring/cluster_config/gfsh_persist.html).
+
+## Modifying Disk Stores
+
+You can modify an offline disk store by using the [alter 
disk-store](../../tools_modules/gfsh/command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)
 command. If you are modifying the default disk store configuration, use 
"DEFAULT" as the disk-store name.
+
+## <a id="defining_disk_stores__section_AFB254CA9C5A494A8E335352A6849C16" 
class="no-quick-link"></a>Configuring Regions, Queues, and PDX Serialization to 
Use the Disk Stores
+
+The following examples show how to use already created, named disk stores for regions, queues, and PDX serialization.
+
+Example of using a disk store for region persistence and overflow:
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create region --name=regionName --type=PARTITION_PERSISTENT_OVERFLOW \
+    --disk-store=serverPersistOverflow
+    ```
+
+-   cache.xml
+
+    ``` pre
+    <region refid="PARTITION_PERSISTENT_OVERFLOW" disk-store-name="persistOverflow1"/>
+    ```
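+
+-   Java API (a sketch, assuming an existing `Cache` named `cache` and the already created disk store `serverPersistOverflow` from the gfsh example above):
+
+    ``` pre
+    RegionFactory<String, Object> rf =
+        cache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW);
+    rf.setDiskStoreName("serverPersistOverflow");   // same effect as disk-store-name
+    Region<String, Object> region = rf.create("regionName");
+    ```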
+
+Example of using a named disk store for server subscription queue overflow 
(cache.xml):
+
+``` pre
+<cache-server port="40404">
+   <client-subscription 
+      eviction-policy="entry" 
+      capacity="10000"
+      disk-store-name="queueOverflow2"/>
+</cache-server>
+```
+
+Example of using a named disk store for PDX serialization metadata (cache.xml):
+
+``` pre
+<pdx read-serialized="true" 
+    persistent="true" 
+    disk-store-name="SerializationDiskStore">
+</pdx>
+```
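+
+When the cache is created programmatically, the equivalent PDX settings can be supplied through `CacheFactory`. A minimal sketch, assuming the disk store `SerializationDiskStore` is defined before any PDX metadata is written:
+
+``` pre
+Cache cache = new CacheFactory()
+    .setPdxReadSerialized(true)                 // read-serialized="true"
+    .setPdxPersistent(true)                     // persistent="true"
+    .setPdxDiskStore("SerializationDiskStore")  // disk-store-name="SerializationDiskStore"
+    .create();
+```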
+
+## <a id="defining_disk_stores__config-disk-store-gateway" 
class="no-quick-link"></a>Configuring Disk Stores on Gateway Senders
+
+Gateway sender queues always overflow to disk and may also be persisted. Assign them to overflow disk stores if you do not persist them, and to persistence disk stores if you do.
+
+Example of using a named disk store for gateway sender queue persistence:
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create gateway-sender --id=persistedSender1 --remote-distributed-system-id=1 \
+    --enable-persistence=true --disk-store-name=diskStoreA --maximum-queue-memory=100
+    ```
+
+-   cache.xml:
+
+    ``` pre
+    <cache>
+      <gateway-sender id="persistedsender1" parallel="true" 
+       remote-distributed-system-id="1"
+       enable-persistence="true"
+       disk-store-name="diskStoreA"
+       maximum-queue-memory="100"/> 
+       ... 
+    </cache>
+    ```
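+
+-   Java API (a sketch, assuming an existing `Cache` named `cache` and the already created disk store `diskStoreA`):
+
+    ``` pre
+    GatewaySenderFactory gsf = cache.createGatewaySenderFactory();
+    gsf.setParallel(true);               // parallel="true"
+    gsf.setPersistenceEnabled(true);     // enable-persistence="true"
+    gsf.setDiskStoreName("diskStoreA");  // disk-store-name="diskStoreA"
+    gsf.setMaximumQueueMemory(100);      // maximum-queue-memory="100"
+    GatewaySender sender = gsf.create("persistedSender1", 1);  // id, remote distributed system id
+    ```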
+
+Examples of using the default disk store for gateway sender queue persistence 
and overflow:
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create gateway-sender --id=persistedSender1 --remote-distributed-system-id=1 \
+    --enable-persistence=true --maximum-queue-memory=100
+    ```
+
+-   cache.xml:
+
+    ``` pre
+    <cache>
+      <gateway-sender id="persistedsender1" parallel="true" 
+       remote-distributed-system-id="1"
+       enable-persistence="true"
+       maximum-queue-memory="100"/> 
+       ... 
+    </cache>
+    ```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb 
b/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
new file mode 100644
index 0000000..bd2bfda
--- /dev/null
+++ b/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
@@ -0,0 +1,53 @@
+---
+title:  Modifying the Default Disk Store
+---
+
+You can modify the behavior of the default disk store by specifying the 
attributes you want for the disk store named "DEFAULT".
+
+<a 
id="using_the_default_disk_store__section_7D6E1A05D28840AC8606EF0D88E9B373"></a>
+Whenever you use disk stores without specifying the disk store to use, Geode 
uses the disk store named "DEFAULT".
+
+For example, these region and queue configurations specify persistence and/or 
overflow, but do not specify the disk-store-name. Because no disk store is 
specified, these use the disk store named "DEFAULT".
+
+Examples of using the default disk store for region persistence and overflow:
+
+-   gfsh:
+
+    ``` pre
+    gfsh>create region --name=regionName --type=PARTITION_PERSISTENT_OVERFLOW
+    ```
+
+-   cache.xml
+
+    ``` pre
+    <region refid="PARTITION_PERSISTENT_OVERFLOW"/>
+    ```
+
+Example of using the default disk store for server subscription queue overflow 
(cache.xml):
+
+``` pre
+<cache-server port="40404">
+    <client-subscription eviction-policy="entry" capacity="10000"/>
+</cache-server>
+```
+
+## <a id="using_the_default_disk_store__section_671AED6EAFEE485D837411DEBE0C6BC6" class="no-quick-link"></a>Change the Behavior of the Default Disk Store
+
+Geode initializes the default disk store with the default disk store 
configuration settings. You can modify the behavior of the default disk store 
by specifying the attributes you want for the disk store named "DEFAULT". The 
only thing you can’t change about the default disk store is the name.
+
+The following example changes the default disk store to allow manual 
compaction and to use multiple, non-default directories:
+
+cache.xml:
+
+``` pre
+<disk-store name="DEFAULT" allow-force-compaction="true">
+     <disk-dirs>
+        <disk-dir>/export/thor/customerData</disk-dir>
+        <disk-dir>/export/odin/customerData</disk-dir>
+        <disk-dir>/export/embla/customerData</disk-dir>
+     </disk-dirs>
+</disk-store>
+```
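+
+If the member is configured through the API instead of cache.xml, the same override can be sketched with `DiskStoreFactory` by creating a disk store under the reserved name "DEFAULT" before any region uses it. The directory paths below are the illustrative ones from the example above, and an existing `Cache` named `cache` is assumed:
+
+``` pre
+DiskStoreFactory dsf = cache.createDiskStoreFactory();
+dsf.setAllowForceCompaction(true);
+dsf.setDiskDirs(new File[] {
+    new File("/export/thor/customerData"),
+    new File("/export/odin/customerData"),
+    new File("/export/embla/customerData")});
+dsf.create(DiskStoreFactory.DEFAULT_DISK_STORE_NAME);  // "DEFAULT"
+```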
+
+<a 
id="using_the_default_disk_store__section_C61BA9AD9A6442DA934C2B20C75E0996"></a>
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb 
b/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
new file mode 100644
index 0000000..13c9801
--- /dev/null
+++ b/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
@@ -0,0 +1,20 @@
+---
+title:  Validating a Disk Store
+---
+
+<a id="validating_disk_store__section_1782CD93DB6040A2BF52014A6600EA44"></a>
+The `validate offline-disk-store` command verifies the health of your offline 
disk store and gives you information about the regions in it, the total 
entries, and the number of records that would be removed if you compacted the 
store.
+
+Use this command at these times:
+
+-   Before compacting an offline disk store to help decide whether it’s 
worth doing.
+-   Before restoring or modifying a disk store.
+-   Any time you want to be sure the disk store is in good shape.
+
+Example:
+
+``` pre
+gfsh>validate offline-disk-store --name=ds1 --disk-dirs=hostB/bupDirectory
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/heap_use/heap_management.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/heap_use/heap_management.html.md.erb 
b/geode-docs/managing/heap_use/heap_management.html.md.erb
new file mode 100644
index 0000000..f3b90b7
--- /dev/null
+++ b/geode-docs/managing/heap_use/heap_management.html.md.erb
@@ -0,0 +1,208 @@
+---
+title: Managing Heap and Off-heap Memory
+---
+
+By default, Apache Geode uses the JVM heap. Apache Geode also offers an option 
to store data off heap. This section describes how to manage heap and off-heap 
memory to best support your application.
+
+## <a id="section_590DA955523246ED980E4E351FF81F71" 
class="no-quick-link"></a>Tuning the JVM's Garbage Collection Parameters
+
+Because Apache Geode is specifically designed to manipulate data held in 
memory, you can optimize your application's performance by tuning the way 
Apache Geode uses the JVM heap.
+
+See your JVM documentation for all JVM-specific settings that can be used to 
improve garbage collection (GC) response. At a minimum, do the following:
+
+1.  Set the initial and maximum heap switches, `-Xms` and `-Xmx`, to the same 
values. The `gfsh start server` options `--initial-heap` and `--max-heap` 
accomplish the same purpose, with the added value of providing resource manager 
defaults such as eviction threshold and critical threshold.
+2.  Configure your JVM for concurrent mark-sweep (CMS) garbage collection.
+3.  If your JVM allows, configure it to initiate CMS collection when heap use 
is at least 10% lower than your setting for the resource manager 
`eviction-heap-percentage`. You want the collector to be working when Geode is 
evicting or the evictions will not result in more free memory. For example, if 
the `eviction-heap-percentage` is set to 65, set your garbage collection to 
start when the heap use is no higher than 55%.
+
+| JVM         | CMS switch flag           | CMS initiation (begin at heap % N)     |
+|-------------|---------------------------|----------------------------------------|
+| Sun HotSpot | `-XX:+UseConcMarkSweepGC` | `-XX:CMSInitiatingOccupancyFraction=N` |
+| JRockit     | `-Xgc:gencon`             | `-XXgcTrigger:N`                       |
+| IBM         | `-Xgcpolicy:gencon`       | N/A                                    |
+
+For the `gfsh start server` command, pass these settings with the `--J` switch, for example: `--J=-XX:+UseConcMarkSweepGC`.
+
+The following is an example of setting JVM for an application:
+
+``` pre
+$ java -Xms30m -Xmx30m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 app.MyApplication
+```
+
+**Note:** Do not use the `-XX:+UseCompressedStrings` and `-XX:+UseStringCache` 
JVM configuration properties when starting up servers. These JVM options can 
cause issues with data corruption and compatibility.
+
+Or, using `gfsh`:
+
+``` pre
+$ gfsh start server --name=app.MyApplication --initial-heap=30m --max-heap=30m \
+--J=-XX:+UseConcMarkSweepGC --J=-XX:CMSInitiatingOccupancyFraction=60
+```
+
+## <a id="how_the_resource_manager_works" class="no-quick-link"></a>Using the 
Geode Resource Manager
+
+The Geode resource manager works with your JVM's tenured garbage collector to 
control heap use and protect your member from hangs and crashes due to memory 
overload.
+
+<a 
id="how_the_resource_manager_works__section_53E80B61991447A2915E8A754383B32D"></a>
+The Geode resource manager prevents the cache from consuming too much memory 
by evicting old data. If the garbage collector is unable to keep up, the 
resource manager refuses additions to the cache until the collector has freed 
an adequate amount of memory.
+
+The resource manager has two threshold settings, each expressed as a 
percentage of the total tenured heap. Both are disabled by default.
+
+  1.  **Eviction Threshold**. Above this, the manager orders evictions for all regions with `eviction-attributes` set to `lru-heap-percentage`. This prompts dedicated background evictions, independent of any application threads, and also tells all application threads that add data to the regions to evict at least as much data as they add. The JVM garbage collector removes the evicted data, reducing heap use. Evictions continue until the manager determines that heap use is again below the eviction threshold.
+
+    The resource manager enforces eviction thresholds only on regions whose LRU eviction policies are based on heap percentage. Regions whose eviction policies are based on entry count or memory size use other mechanisms to manage evictions. See [Eviction](../../developing/eviction/chapter_overview.html) for more detail regarding eviction policies.
+
+  2.  **Critical Threshold**. Above this, all activity that might add data to the cache is refused. This threshold is set above the eviction threshold and is intended to give the eviction and GC work time to catch up. This JVM, all other JVMs in the distributed system, and all clients connected to the system receive `LowMemoryException` for operations that would add to this critical member's heap consumption. Activities that fetch or reduce data are allowed. For a list of refused operations, see the Javadocs for the `ResourceManager` method `setCriticalHeapPercentage`.
+
+    Critical threshold is enforced on all regions, regardless of LRU eviction 
policy, though it can be set to zero to disable its effect.
+
+<img src="../../images/DataManagement-9.png" 
id="how_the_resource_manager_works__image_C3568D47EE1B4F2C9F0742AE9C291BF1" 
class="image" />
+
+When heap use passes the eviction threshold in either direction, the manager 
logs an info-level message.
+
+When heap use exceeds the critical threshold, the manager logs an error-level 
message. Avoid exceeding the critical threshold. Once identified as critical, 
the Geode member becomes a read-only member that refuses cache updates for all 
of its regions, including incoming distributed updates.
+
+For more information, see `org.apache.geode.cache.control.ResourceManager` in 
the online API documentation.
+
+## <a id="how_the_resource_manager_works__section_EA5E52E65923486488A71E3E6F0DE9DA" class="no-quick-link"></a>How Background Eviction Is Performed
+
+When the manager kicks off evictions:
+
+1.  From all regions in the local cache that are configured for heap LRU 
eviction, the background eviction manager creates a randomized list containing 
one entry for each partitioned region bucket (primary or secondary) and one 
entry for each non-partitioned region. So each partitioned region bucket is 
treated the same as a single, non-partitioned region.
+
+2.  The background eviction manager starts four evictor threads for each 
processor on the local machine. The manager passes each thread its share of the 
bucket/region list. The manager divides the bucket/region list as evenly as 
possible by count, and not by memory consumption.
+
+3.  Each thread iterates round-robin over its bucket/region list, evicting one 
LRU entry per bucket/region until the resource manager sends a signal to stop 
evicting.
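+
+The following is a conceptual sketch of that scheduling, not Geode's implementation: a shuffled list of evictable units (buckets or regions) is split evenly by count across four threads per processor, and each thread evicts one LRU entry per unit per pass until signalled to stop. The `Evictable` interface and the stop flag are illustrative assumptions.
+
+``` pre
+interface Evictable {
+  void evictOneLruEntry();
+}
+
+static void runBackgroundEviction(List<Evictable> units, AtomicBoolean stop) {
+  List<Evictable> shuffled = new ArrayList<>(units);
+  Collections.shuffle(shuffled);                       // randomized bucket/region list
+
+  int threads = Runtime.getRuntime().availableProcessors() * 4;
+  ExecutorService pool = Executors.newFixedThreadPool(threads);
+  for (int t = 0; t < threads; t++) {
+    List<Evictable> share = new ArrayList<>();         // divided evenly by count
+    for (int i = t; i < shuffled.size(); i += threads) {
+      share.add(shuffled.get(i));
+    }
+    pool.submit(() -> {
+      while (!stop.get()) {                            // the "stop evicting" signal
+        for (Evictable unit : share) {
+          unit.evictOneLruEntry();                     // one LRU entry per unit per pass
+        }
+      }
+    });
+  }
+  pool.shutdown();
+}
+```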
+
+See also [Memory Requirements for Cached 
Data](../../reference/topics/memory_requirements_for_cache_data.html#calculating_memory_requirements).
+
+## <a id="configuring_resource_manager_controlling_heap_use" 
class="no-quick-link"></a>Controlling Heap Use with the Resource Manager
+
+Resource manager behavior is closely tied to the triggering of Garbage 
Collection (GC) activities, the use of concurrent garbage collectors in the 
JVM, and the number of parallel GC threads used for concurrency.
+
+<a 
id="configuring_resource_manager__section_B47A78E7BA0048C89FBBDB7441C308BE"></a>
+The recommendations provided here for using the manager assume you have a 
solid understanding of your Java VM's heap management and garbage collection 
service.
+
+The resource manager is available for use in any Apache Geode member, but you may not want to activate it everywhere. For some members it might be better to restart occasionally after a hang or `OutOfMemoryError` crash than to evict data or refuse distributed caching activities. Also, members that do not risk running past their memory limits gain little from the resource manager's overhead. Cache servers are often configured to use the manager because they generally host more data and have more data activity than other members, requiring greater responsiveness in data cleanup and collection.
+
+For the members where you want to activate the resource manager:
+
+1.  Configure Geode for heap LRU management.
+
+2.  Set the JVM GC tuning parameters to handle heap and garbage collection in 
conjunction with the Geode manager.
+
+3.  Monitor and tune heap LRU configurations and your GC configurations.
+
+4.  Before going into production, run your system tests with application 
behavior and data loads that approximate your target systems so you can tune as 
well as possible for production needs.
+
+5.  In production, keep monitoring and tuning to meet changing needs.
+
+## <a id="configuring_resource_manager__section_4949882892DA46F6BB8588FA97037F45" class="no-quick-link"></a>Configure Geode for Heap LRU Management
+
+The configuration terms used here are `cache.xml` elements and attributes, but 
you can also configure through `gfsh` and the 
`org.apache.geode.cache.control.ResourceManager` and `Region` APIs.
+
+1.  When starting up your server, set `initial-heap` and `max-heap` to the 
same value.
+
+2.  Set the `resource-manager` `critical-heap-percentage` threshold. This should be as close to 100 as possible, while still low enough that the manager's response can prevent the member from hanging or getting an `OutOfMemoryError`. The threshold is zero (no threshold) by default.
+
+    **Note:** When you set this threshold, it also enables a query monitoring 
feature that prevents most out-of-memory exceptions when executing queries or 
creating indexes. See [Monitoring Queries for Low 
Memory](../../developing/querying_basics/monitor_queries_for_low_memory.html#topic_685CED6DE7D0449DB8816E8ABC1A6E6F).
+
+3.  Set the `resource-manager` `eviction-heap-percentage` threshold to a value 
lower than the critical threshold. This should be as high as possible while 
still low enough to prevent your member from reaching the critical threshold. 
The threshold is zero (no threshold) by default.
+
+4.  Decide which regions will participate in heap eviction and set their 
`eviction-attributes` to `lru-heap-percentage`. See 
[Eviction](../../developing/eviction/chapter_overview.html). The regions you 
configure for eviction should have enough data activity for the evictions to be 
useful and should contain data your application can afford to delete or offload 
to disk.
+
+<a 
id="configuring_resource_manager__section_5D88064B75C643B0849BBD4345A6671B"></a>
+
+gfsh example:
+
+``` pre
+gfsh>start server --name=server1 --initial-heap=30m --max-heap=30m \
+--critical-heap-percentage=80 --eviction-heap-percentage=60
+```
+
+cache.xml example:
+
+``` pre
+<cache>
+<region refid="REPLICATE_HEAP_LRU" />
+...
+<resource-manager critical-heap-percentage="80" eviction-heap-percentage="60"/>
+</cache>
+```
+
+**Note:** The `resource-manager` specification must appear after the region 
declarations in your cache.xml file.
+
+## <a id="set_jvm_gc_tuning_params" class="no-quick-link"></a>Set the JVM GC 
Tuning Parameters
+
+If your JVM allows, configure it to initiate concurrent mark-sweep (CMS) 
garbage collection when heap use is at least 10% lower than your setting for 
the resource manager `eviction-heap-percentage`. You want the collector to be 
working when Geode is evicting or the evictions will not result in more free 
memory. For example, if the `eviction-heap-percentage` is set to 65, set your 
garbage collection to start when the heap use is no higher than 55%.
+
+## <a id="configuring_resource_manager__section_DE1CC494C2B547B083AA00821250972A" class="no-quick-link"></a>Monitor and Tune Heap LRU Configurations
+
+In tuning the resource manager, your central focus should be keeping the member below the critical threshold. The critical threshold is provided to avoid member hangs and crashes, but because of its exception-throwing behavior for distributed updates, time spent above the critical threshold negatively impacts the entire distributed system. To stay below critical, tune the system so that Geode eviction and the JVM's GC respond adequately when the eviction threshold is reached.
+
+Use the statistics provided by your JVM to make sure your memory and GC 
settings are sufficient for your needs.
+
+The Geode `ResourceManagerStats` provide information about memory use and the 
manager thresholds and eviction activities.
+
+If your application spikes above the critical threshold on a regular basis, 
try lowering the eviction threshold. If the application never goes near 
critical, you might raise the eviction threshold to gain more usable memory 
without the overhead of unneeded evictions or GC cycles.
+
+The settings that will work well for your system depend on a number of 
factors, including these:
+
+ - **The size of the data objects you store in the cache.** Very large data objects can be evicted and garbage collected relatively quickly. The same amount of space in use by many small objects takes more processing effort to clear and might require lower thresholds to allow eviction and GC activities to keep up.
+
+ - **Application behavior.** Applications that quickly put a lot of data into the cache can more easily overrun the eviction and GC capabilities. Applications that operate more slowly may be more easily offset by eviction and GC efforts, possibly allowing you to set your thresholds higher than in the more volatile system.
+
+ - **Your choice of JVM.** Each JVM has its own GC behavior, which affects how efficiently the collector can operate, how quickly it kicks in when needed, and other factors.
+
+## <a id="resource_manager_example_configurations" 
class="no-quick-link"></a>Resource Manager Example Configurations
+
+<a 
id="resource_manager_example_configurations__section_B50C552B114D47F3A63FC906EB282024"></a>
+These examples set the critical threshold to 85 percent of the tenured heap 
and the eviction threshold to 75 percent. The region `bigDataStore` is 
configured to participate in the resource manager's eviction activities.
+
+-   gfsh Example:
+
+    ``` pre
+    gfsh>start server --name=server1 --initial-heap=30m --max-heap=30m \
+    --critical-heap-percentage=85 --eviction-heap-percentage=75
+    ```
+
+    ``` pre
+    gfsh>create region --name=bigDataStore --type=PARTITION_HEAP_LRU
+    ```
+
+-   XML:
+
+    ``` pre
+    <cache>
+    <region name="bigDataStore" refid="PARTITION_HEAP_LRU"/>
+    ...
+    <resource-manager critical-heap-percentage="85" eviction-heap-percentage="75"/>
+    </cache>
+    ```
+
+    **Note:** The `resource-manager` specification must appear after the 
region declarations in your cache.xml file.
+
+-   Java:
+
+    ``` pre
+    Cache cache = new CacheFactory().create();
+
+    ResourceManager rm = cache.getResourceManager();
+    rm.setCriticalHeapPercentage(85);
+    rm.setEvictionHeapPercentage(75);
+
+    RegionFactory rf =
+        cache.createRegionFactory(RegionShortcut.PARTITION_HEAP_LRU);
+    Region region = rf.create("bigDataStore");
+    ```
+
+## <a id="resource_manager_example_configurations__section_95497FDF114A4DC8AC5D899E05E324E5" class="no-quick-link"></a>Use Case for the Example Code
+
+This is one possible scenario for the configuration used in the examples:
+
+-   A 64-bit Java VM with 8 GB of heap space on a 4-CPU system running Linux.
+-   The data region `bigDataStore` has approximately 2-3 million small values with an average entry size of 512 bytes, so approximately 4-6 GB of the heap is used for region storage.
+-   The member hosting the region also runs an application that may take up to 1 GB of the heap.
+-   The application must never run out of heap space and has been crafted such that data loss in the region is acceptable if heap space becomes limited due to application issues, so the default `lru-heap-percentage` action, destroy, is suitable.
+-   The application's service guarantee makes it very intolerant of `OutOfMemoryError` errors. Testing has shown that leaving 15% headroom above the critical threshold when adding data to the region gives 99.5% uptime with no `OutOfMemoryError` errors when configured with the CMS garbage collector using `-XX:CMSInitiatingOccupancyFraction=70`.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/heap_use/lock_memory.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/heap_use/lock_memory.html.md.erb 
b/geode-docs/managing/heap_use/lock_memory.html.md.erb
new file mode 100644
index 0000000..3c7e23f
--- /dev/null
+++ b/geode-docs/managing/heap_use/lock_memory.html.md.erb
@@ -0,0 +1,35 @@
+---
+title: Locking Memory (Linux Systems Only)
+---
+
+<a id="locking-memory"></a>
+
+
+On Linux systems, you can lock memory to prevent the operating system from 
paging out heap or off-heap memory.
+
+To use this feature:
+
+1.  Configure the operating system limits for locked memory. Increase the 
operating system's `ulimit -l` value (the maximum size that may be locked in 
memory) from the default (typically 32 KB or 64 KB) to at least the total 
amount of memory used by Geode for on-heap or off-heap storage. To view the 
current setting, enter `ulimit -a` at a shell prompt and find the value for 
`max locked memory`:
+
+    ``` pre
+    # ulimit -a
+    ...
+    max locked memory       (kbytes, -l) 64
+    ...
+    ```
+
+    Use `ulimit -l max-size-in-kbytes` to raise the limit. For example, to set 
the locked memory limit to 64 GB:
+
+    ``` pre
+    # ulimit -l 64000000
+    ```
+
+2.  Using locked memory in this manner increases the time required to start 
Geode. The additional time required to start Geode depends on the total amount 
of memory used, and can range from several seconds to 10 minutes or more. To 
improve startup time and reduce the potential of member timeouts, instruct the 
kernel to free operating system page caches just before starting a Geode member 
by issuing the following command:
+
+    ``` pre
+    $ echo 1 > /proc/sys/vm/drop_caches
+    ```
+
+3.  Start each Geode data store with the gfsh `--lock-memory=true` option. If
you deploy more than one server per host, begin by starting each server 
sequentially. Starting servers sequentially avoids a race condition in the 
operating system that can cause failures (even machine crashes) if you 
accidentally over-allocate the available RAM. After you verify that the system 
configuration is stable, you can then start servers concurrently.
+
+
