http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/heap_use/off_heap_management.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/heap_use/off_heap_management.html.md.erb 
b/geode-docs/managing/heap_use/off_heap_management.html.md.erb
new file mode 100644
index 0000000..0c8bf7c
--- /dev/null
+++ b/geode-docs/managing/heap_use/off_heap_management.html.md.erb
@@ -0,0 +1,192 @@
+---
+title: Managing Off-Heap Memory
+---
+<a id="managing-off-heap-memory"></a>
+
+
+Geode can be configured to store region values in off-heap memory, which is 
memory within the JVM that is not subject to Java garbage collection.
+
+Garbage collection (GC) within a JVM can prove to be a performance impediment. 
A server cannot exert control over when garbage collection within the JVM heap 
memory takes place, and the server has little control over the triggers for 
invocation. Off-heap memory offloads values to a storage area that is not 
subject to Java GC. By taking advantage of off-heap storage, an application can 
reduce the amount of heap storage that is subject to GC overhead.
+
+Off-heap memory works in conjunction with the heap; it does not replace it. The keys are stored in heap memory space. Geode's own memory manager handles the off-heap memory and, for certain sets of region data, delivers better performance than the Java garbage collector would.
+
+The resource manager monitors the contents of off-heap memory and invokes 
memory management operations in accordance with two thresholds similar to those 
used for monitoring the JVM heap: `eviction-off-heap-percentage` and 
`critical-off-heap-percentage`.
+
+## On-heap and Off-heap Objects
+
+The following objects are always stored in the JVM heap:
+
+-   Region metadata
+-   Entry metadata
+-   Keys
+-   Indexes
+-   Subscription queue elements
+
+The following objects can be stored in off-heap memory:
+
+-   Values - maximum value size is 2GB
+-   Reference counts
+-   List of free memory blocks
+-   WAN queue elements
+
+**Note:**
+Do not use functional range indexes with off-heap data, as they are not 
supported. An attempt to do so generates an exception.
+
+## Off-heap Recommendations
+
+Off-heap storage is best suited to data patterns where:
+
+-   Stored values are relatively uniform in size
+-   Stored values are mostly less than 128K in size
+-   The usage patterns involve cycles of many creates followed by destroys or clears
+-   The values do not need to be frequently deserialized
+-   Many of the values are long-lived reference data
+
+Be aware that Geode has to perform extra work to access the data stored in 
off-heap memory since it is stored in serialized form. This extra work may 
cause some use cases to run slower in an off-heap configuration, even though 
they use less memory and avoid garbage collection overhead. However, even with 
the extra deserialization, off-heap storage may give you the best performance. 
Features that may increase overhead include
+
+-   frequent updates
+-   stored values of widely varying sizes
+-   deltas
+-   queries
+
+## Implementation Details
+
+The off-heap memory manager is efficient at handling region data values that 
are all the same size or are of fixed sizes. With fixed and same-sized data 
values allocated within the off-heap memory, freed chunks can often be re-used, 
and there is little or no need to devote cycles to defragmentation.
+
+Region values that are less than or equal to eight bytes in size will not 
reside in off-heap memory, even if the region is configured to use off-heap 
memory. These very small size region values reside in the JVM heap in place of 
a reference to an off-heap location. This performance enhancement saves space 
and load time.
+
+## Controlling Off-heap Use with the Resource Manager
+
+The Geode resource manager controls off-heap memory by means of two 
thresholds, in much the same way as it does JVM heap memory. See [Using the 
Geode Resource Manager](heap_management.html#how_the_resource_manager_works). 
The resource manager prevents the cache from consuming too much off-heap memory 
by evicting old data. If the off-heap memory manager is unable to keep up, the 
resource manager refuses additions to the cache until the off-heap memory 
manager has freed an adequate amount of memory.
+
+The resource manager has two threshold settings, each expressed as a 
percentage of the total off-heap memory. Both are disabled by default.
+
+1.  **Eviction Threshold**. The percentage of off-heap memory at which eviction should begin. Evictions continue until the resource manager determines that off-heap memory use is again below the eviction threshold. Set the eviction threshold with the `eviction-off-heap-percentage` resource manager attribute. The resource manager enforces an eviction threshold only on regions with the HEAP\_LRU characteristic. If the critical threshold is non-zero, the default eviction threshold is 5% below the critical threshold. If the critical threshold is zero, the default eviction threshold is 80% of total off-heap memory.
+
+    The resource manager enforces eviction thresholds only on regions whose 
LRU eviction policies are based on heap percentage. Regions whose eviction 
policies based on entry count or memory size use other mechanisms to manage 
evictions. See [Eviction](../../developing/eviction/chapter_overview.html) for 
more detail regarding eviction policies.
+
+2.  **Critical Threshold**. The percentage of off-heap memory at which the cache is at risk of becoming inoperable. When cache use exceeds the critical threshold, all activity that might add data to the cache is refused. Any operation that would increase consumption of off-heap memory throws a `LowMemoryException` instead of completing its operation. Set the critical threshold with the `critical-off-heap-percentage` resource manager attribute.
+
+    Critical threshold is enforced on all regions, regardless of LRU eviction 
policy, though it can be set to zero to disable its effect.
+
+## Specifying Off-heap Memory
+
+To use off-heap memory, specify the following options when setting up servers 
and regions:
+
+-   Start the JVM as described in [Tuning the JVM's Garbage Collection 
Parameters](heap_management.html#section_590DA955523246ED980E4E351FF81F71). In 
particular, set the initial and maximum heap sizes to the same value. Sizes 
less than 32GB are optimal when you plan to use off-heap memory.
+-   From gfsh, start each server that will support off-heap memory with a 
non-zero `off-heap-memory-size` value, specified in megabytes (m) or gigabytes 
(g). If you plan to use the resource manager, specify critical threshold, 
eviction threshold, or (in most cases) both.
+
+    Example:
+
+    ``` pre
+    gfsh> start server --name=server1 --initial-heap=10G --max-heap=10G --off-heap-memory-size=200G \
+    --lock-memory=true --critical-off-heap-percentage=90 --eviction-off-heap-percentage=80
+    ```
+
+-   Mark regions whose entry values should be stored off-heap by setting the `off-heap` region attribute to `true`. Configure other region attributes uniformly for all members that host data for the same region.
+
+    Example:
+
+    ``` pre
+    gfsh>create region --name=region1 --type=PARTITION_HEAP_LRU --off-heap=true
+    ```
+
+## gfsh Off-heap Support
+
+gfsh supports off-heap memory in server and region creation operations and in 
reporting functions:
+
+alter disk-store  
+`--off-heap=(true | false)` resets the off-heap attribute for the specified 
region. See [alter 
disk-store](../../tools_modules/gfsh/command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)
 for details.
+
+create region  
+`--off-heap=(true | false)` sets the off-heap attribute for the specified 
region. See [create 
region](../../tools_modules/gfsh/command-pages/create.html#topic_54B0985FEC5241CA9D26B0CE0A5EA863)
 for details.
+
+describe member  
+displays off-heap size
+
+describe offline-disk-store  
+shows if an off-line region is off-heap
+
+describe region  
+displays the value of a region's off-heap attribute
+
+show metrics  
+includes off-heap metrics `maxMemory`, `freeMemory`, `usedMemory`, `objects`, 
`fragmentation` and `defragmentationTime`
+
+start server  
+supports off-heap options `--lock-memory`, `--off-heap-memory-size`, `--critical-off-heap-percentage`, and `--eviction-off-heap-percentage`. See [start server](../../tools_modules/gfsh/command-pages/start.html#topic_3764EE2DB18B4AE4A625E0354471738A) for details.
+
+## ResourceManager API
+
+The `org.apache.geode.cache.control.ResourceManager` interface defines methods 
that support off-heap use:
+
+-   `public void setCriticalOffHeapPercentage(float Percentage)`
+-   `public float getCriticalOffHeapPercentage()`
+-   `public void setEvictionOffHeapPercentage(float Percentage)`
+-   `public float getEvictionOffHeapPercentage()`
+
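+For example, a minimal sketch of adjusting these thresholds at runtime through the `ResourceManager` API (assuming an existing `Cache` reference named `cache`; the threshold values shown are illustrative):
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.control.ResourceManager;
+
+// Obtain the resource manager from an existing cache
+ResourceManager resourceManager = cache.getResourceManager();
+
+// Refuse cache additions above 90% of off-heap memory; begin evicting at 80%
+resourceManager.setCriticalOffHeapPercentage(90.0f);
+resourceManager.setEvictionOffHeapPercentage(80.0f);
+```
+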
+The gemfire.properties file supports one off-heap property:
+
+`off-heap-memory-size`  
+Specifies the size of off-heap memory in megabytes (m) or gigabytes (g). For 
example:
+
+``` pre
+off-heap-memory-size=4096m
+off-heap-memory-size=120g
+```
+
+See [gemfire.properties and gfsecurity.properties (Geode 
Properties)](../../reference/topics/gemfire_properties.html) for details.
+
+The cache.xml file supports one region attribute:
+
+`off-heap(=true | false)`  
+Specifies that the region uses off-heap memory; defaults to `false`. For 
example:
+
+``` pre
+<region-attributes
+  off-heap="true">
+</region-attributes>
+```
+
+See 
[&lt;region-attributes&gt;](../../reference/topics/cache_xml.html#region-attributes)
 for details.
+
+The cache.xml file supports two resource manager attributes:
+
+`critical-off-heap-percentage=value`  
+Specifies the percentage of off-heap memory at or above which the cache is 
considered in danger of becoming inoperable due to out of memory exceptions. 
See 
[&lt;resource-manager&gt;](../../reference/topics/cache_xml.html#resource-manager)
 for details.
+
+`eviction-off-heap-percentage=value`  
+Specifies the percentage of off-heap memory at or above which eviction should 
begin. Can be set for any region, but actively operates only in regions 
configured for HEAP\_LRU eviction. See 
[&lt;resource-manager&gt;](../../reference/topics/cache_xml.html#resource-manager)
 for details.
+
+For example:
+
+``` pre
+<cache>
+...
+   <resource-manager 
+      critical-off-heap-percentage="99.9" 
+      eviction-off-heap-percentage="85"/>
+...
+</cache>
+```
+
+## <a id="managing-off-heap-memory__section_o4s_tg5_gv" 
class="no-quick-link"></a>Tuning Off-heap Memory Usage
+
+Geode collects statistics on off-heap memory usage which you can view with the 
gfsh `show metrics` command. See [Off-Heap 
(OffHeapMemoryStats)](../../reference/statistics/statistics_list.html#topic_ohc_tjk_w5)
 for a description of available off-heap statistics.
+
+Off-heap memory is optimized, by default, for storing values of 128 KB in size. This figure is known as the "maximum optimized stored value size," which we will denote here by *maxOptStoredValSize*. If your data typically runs larger, you can enhance performance by increasing the OFF\_HEAP\_FREE\_LIST\_COUNT system parameter to a number larger than `maxOptStoredValSize/8`, where *maxOptStoredValSize* is expressed in bytes (1 KB = 1024 bytes). So, the default values correspond to:
+
+``` pre
+128 KB / 8 = (128 * 1024) / 8 = 131,072 / 8 = 16,384
+-Dgemfire.OFF_HEAP_FREE_LIST_COUNT=16384
+```
+
+To optimize for a maximum optimized stored value size that is twice the 
default, or 256 KB, the free list count should be doubled:
+
+``` pre
+-Dgemfire.OFF_HEAP_FREE_LIST_COUNT=32768
+```
+
+During the tuning process, you can toggle the `off-heap` region attribute on 
and off, leaving other off-heap settings and parameters in place, in order to 
compare your application's on-heap and off-heap performance.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/configuring_log4j2.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/configuring_log4j2.html.md.erb 
b/geode-docs/managing/logging/configuring_log4j2.html.md.erb
new file mode 100644
index 0000000..09239d7
--- /dev/null
+++ b/geode-docs/managing/logging/configuring_log4j2.html.md.erb
@@ -0,0 +1,51 @@
+---
+title:  Advanced Users—Configuring Log4j 2 for Geode
+---
+
+Basic Geode logging is configured via the gemfire.properties file. This topic is intended for advanced users who need increased control over logging due to integration with third-party libraries.
+
+The default `log4j2.xml` that Geode uses is stored in geode.jar as 
`log4j2-default.xml`. The contents of the configuration can be viewed in the 
product distribution in the following location: 
`$GEMFIRE/defaultConfigs/log4j2.xml`.
+
+To specify your own `log4j2.xml` configuration file (or anything else 
supported by Log4j 2 such as .json or .yaml), use the following flag when 
starting up your JVM or Geode member:
+
+``` pre
+-Dlog4j.configurationFile=<location-of-your-file>
+```
+
+If the Java system property `log4j.configurationFile` is specified, then Geode will not use the `log4j2-default.xml` included in geode.jar. However, Geode will still create and register an AlertAppender and a LogWriterAppender if the `alert-level` and `log-file` Geode properties are configured. You can then use the Geode LogWriter to log to Geode's log or to generate an Alert, and receive log statements from your application and all third-party libraries. Alternatively, you can use any front-end logging API that is configured to log to Log4j 2.
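+
+As a sketch, a hypothetical application class (`OrderService` below is illustrative, not part of Geode) can log through the Log4j 2 API; with Geode's appenders registered, such statements are picked up alongside Geode's own logging:
+
+``` pre
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+public class OrderService {
+  // Logger backed by Log4j 2 Core, and therefore by Geode's registered appenders
+  private static final Logger logger = LogManager.getLogger(OrderService.class);
+
+  public void placeOrder(String orderId) {
+    logger.info("Placing order {}", orderId);
+  }
+}
+```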
+
+## Using Different Front-End Logging APIs to Log to Log4j2
+
+You can also configure Log4j 2 to work with various popular and commonly used 
logging APIs. To obtain and configure the most popular front-end logging APIs 
to log to Log4j 2, see the instructions on the Apache Log4j 2 web site at 
[http://logging.apache.org/log4j/2.x/](http://logging.apache.org/log4j/2.x/).
+
+For example, if you are using:
+
+-   **Commons Logging**, download "Commons Logging Bridge" 
(`log4j-jcl-2.1.jar`)
+-   **SLF4J**, download "SLF4J Binding" (`log4j-slf4j-impl-2.1.jar`)
+-   **java.util.logging**, download the "JUL adapter" (`log4j-jul-2.1.jar`)
+
+See 
[http://logging.apache.org/log4j/2.x/faq.html](http://logging.apache.org/log4j/2.x/faq.html)
 for more examples.
+
+All three of the above JAR files are in the full distribution of Log4J 2.1 
which can be downloaded at 
[http://logging.apache.org/log4j/2.x/download.html](http://logging.apache.org/log4j/2.x/download.html).
 Download the appropriate bridge, adapter, or binding JARs to ensure that Geode 
logging is integrated with every logging API used in various third-party 
libraries or in your own applications.
+
+**Note:**
+Apache Geode has been tested with Log4j 2.1. As newer versions of Log4j 2 come 
out, you can find 2.1 under Previous Releases on that page.
+
+## Customizing Your Own log4j2.xml File
+
+Advanced users may want to move away entirely from setting `log-*` gemfire 
properties and instead specify their own `log4j2.xml` using 
`-Dlog4j.configurationFile`.
+
+Custom Log4j 2 configuration in Geode comes with some caveats and notes:
+
+-   Do not use `monitorInterval=` in your log4j2.xml file because doing so can have a significant performance impact. This setting instructs Log4j 2 to monitor the log4j2.xml config file at runtime and automatically reload and reconfigure if the file changes.
+-   Geode's default `log4j2.xml` specifies status="FATAL" because Log4j 2's 
StatusLogger generates warnings to standard out at ERROR level anytime Geode 
stops its AlertAppender or LogWriterAppender. Geode uses a lot of concurrent 
threads that are executing code with log statements; these threads may be 
logging while the Geode appenders are being stopped.
+-   Geode's default log4j2.xml specifies `shutdownHook="disable"` because Geode has a shutdown hook which disconnects the DistributedSystem and closes the Cache, which is executing the code that performs logging. If the Log4j 2 shutdown hook stops logging before Geode completes its shutdown, Log4j 2 will attempt to start back up. This restart in turn attempts to register another Log4j 2 shutdown hook, which fails, resulting in a FATAL-level message logged by Log4j 2.
+-   The GEMFIRE\_VERBOSE marker (Log4j 2 markers are discussed at [http://logging.apache.org/log4j/2.x/manual/markers.html](http://logging.apache.org/log4j/2.x/manual/markers.html)) can be used to enable additional verbose log statements at TRACE level. Many log statements are enabled simply by enabling DEBUG or TRACE. However, even more log statements can be further enabled by using MarkerFilter to accept GEMFIRE\_VERBOSE. The default Geode `log4j2.xml` disables GEMFIRE\_VERBOSE with this line:
+
+    ``` pre
+    <MarkerFilter marker="GEMFIRE_VERBOSE" onMatch="DENY" 
onMismatch="NEUTRAL"/> 
+    ```
+
+    You can enable the GEMFIRE\_VERBOSE log statements by changing 
`onMatch="DENY"` to `onMatch="ACCEPT"`. Typically, it's more useful to simply 
enable DEBUG or TRACE on certain classes or packages instead of for the entire 
Geode product. However, this setting can be used for internal debugging 
purposes if all other debugging methods fail.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/how_logging_works.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/how_logging_works.html.md.erb 
b/geode-docs/managing/logging/how_logging_works.html.md.erb
new file mode 100644
index 0000000..0f223a4
--- /dev/null
+++ b/geode-docs/managing/logging/how_logging_works.html.md.erb
@@ -0,0 +1,22 @@
+---
+title:  How Geode Logging Works
+---
+
+Apache Geode uses Apache Log4j 2 as the basis for its logging system.
+
+Geode uses [Apache Log4j 2](http://logging.apache.org/log4j/2.x/) API and Core 
libraries as the basis for its logging system. Log4j 2 API is a popular and 
powerful front-end logging API used by all the Geode classes to generate log 
statements. Log4j 2 Core is a backend implementation for logging; you can route 
any of the front-end logging API libraries to log to this backend. Geode uses 
the Core backend to run two custom Log4j 2 Appenders: **AlertAppender** and 
**LogWriterAppender**.
+
+Geode has been tested with Log4j 2.1.
+
+**Note:**
+For this reason, Geode now always requires the following JARs to be in the 
classpath: `log4j-api-2.1.jar`, `log4j-core-2.1.jar`. Both of these JARs are 
distributed in the `$GEMFIRE/lib` directory and included in the appropriate 
`*-dependencies.jar` convenience libraries.
+
+**AlertAppender** is the component that generates Geode alerts that are then 
managed by the JMX Management and Monitoring system. See [Notification 
Federation](../management/notification_federation_and_alerts.html#topic_212EE5A2ABAB4E8E8EF71807C9ECEF1A)
 for more details.
+
+**LogWriterAppender** is the component that is configured by all the `log-*` 
Geode properties such as `log-file`, `log-file-size-limit` and 
`log-disk-space-limit`.
+
+Both of these appenders are created and controlled programmatically. You 
configure their behavior with the `log-*` Geode properties and the alert level 
that is configured within the JMX Management & Monitoring system. These 
appenders do not currently support configuration within a `log4j2.xml` config 
file.
+
+Advanced users may wish to define their own `log4j2.xml`. See [Advanced 
Users—Configuring Log4j 2 for Geode](configuring_log4j2.html) for more 
details.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/log_collection_utility.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/log_collection_utility.html.md.erb 
b/geode-docs/managing/logging/log_collection_utility.html.md.erb
new file mode 100644
index 0000000..92d324b
--- /dev/null
+++ b/geode-docs/managing/logging/log_collection_utility.html.md.erb
@@ -0,0 +1,54 @@
+---
+title:  Log Collection Utility
+---
+
+To aid in the troubleshooting of Apache Geode issues, you can use the provided 
log collection utility to gather and upload log files and other troubleshooting 
artifacts. This tool is only supported on Linux machines.
+
+This utility is used to gather log files and other troubleshooting artifacts 
from a Geode cluster.
+
+The tool goes through and collects all files ending with `.log`, `.err`, `.cfg`, `.gfs`, `.stack`, `.xml`, `.properties`, and `.txt` from the working directories of running Geode processes. It also obtains thread dumps for each Geode process but will not collect heap dumps.
+
+The collection utility copies all log and artifact files to its host machine 
and then compresses all the files. You should ensure that the machine running 
the utility has sufficient disk space to hold all the collected log and 
artifact files from the cluster.
+
+In default mode, the tool requires that a Geode process is running on each 
machine where the tool is gathering logs and artifact files. If you would like 
to collect log and artifact files from a machine or machines where Geode 
processes are not running, use *Static Copy Mode* by specifying the `-m` option 
and providing a file that lists log and artifact file locations.
+
+The utility is provided in `$GEMFIRE/tools/LogCollectionUtility`.
+
+## Usage
+
+``` pre
+java -jar gfe-logcollect.jar -c <company> -o <output dir> [OPTIONS]
+
+Required arguments:
+        -c company name to append to output filename
+        -o output directory to store all collected log files
+
+Optional arguments:
+        -a comma separated list of hosts with no spaces. EG. host1,host2,host3 
(defaults to localhost)
+        -u username to use to connect via ssh (defaults to current user)
+        -i identity file to use for PKI based ssh (defaults to 
~/.ssh/id_[dsa|rsa]
+        -p prompt for a password to use for ssh connections
+        -t ticket number to append to created zip file
+        -d don't clean up collected log files after the zip has been created
+        -s send the zip file to Pivotal support
+        -f ftp server to upload collected logs to.  Defaults to 
ftp.gemstone.com
+        -v print version of this utility
+        -h print this help information
+
+Static Copy Mode
+        -m <file> Use a file with log locations instead of scanning for logs.
+           Entries should be in the format hostname:/log/location
+```
+
+## Known Limitations
+
+The following are known limitations with the tool:
+
+1.  Only supports Linux hosts.
+2.  Requires SSH access between machines.
+3.  Requires that the username be the same for each host that this app scans. 
For example, you can't specify user@host1, anotherUser@host2, etc.
+4.  Requires that SSH access is available across all hosts using either the 
same password or the same public key.
+5.  In order to get stacks using jstack, this process must be run as the same user who owns the Geode process.
+6.  Requires 'jps' (typically in $JAVA\_HOME/bin) to be in the user's PATH on 
each machine.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/logging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/logging.html.md.erb 
b/geode-docs/managing/logging/logging.html.md.erb
new file mode 100644
index 0000000..e8bdb13
--- /dev/null
+++ b/geode-docs/managing/logging/logging.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Logging
+---
+
+Comprehensive logging messages help you confirm system configuration and debug 
problems in configuration and code.
+
+-   **[How Geode Logging 
Works](../../managing/logging/how_logging_works.html)**
+
+    Apache Geode uses Apache Log4j 2 as the basis for its logging system.
+
+-   **[Understanding Log Messages and Their 
Categories](../../managing/logging/logging_categories.html)**
+
+    System logging messages typically pertain to startup; logging management; 
connection and system membership; distribution; or cache, region, and entry 
management.
+
+-   **[Naming, Searching, and Creating Log 
Files](../../managing/logging/logging_whats_next.html)**
+
+    The best way to manage and understand the logs is to have each member log 
to its own files.
+
+-   **[Set Up Logging](../../managing/logging/setting_up_logging.html)**
+
+    You configure logging in a member's `gemfire.properties` or at startup 
with `gfsh`.
+
+-   **[Advanced Users—Configuring Log4j 2 for 
Geode](../../managing/logging/configuring_log4j2.html)**
+
+    Basic Geode logging is configured via the gemfire.properties file. This topic is intended for advanced users who need increased control over logging due to integration with third-party libraries.
+
+-   **[Log Collection 
Utility](../../managing/logging/log_collection_utility.html)**
+
+    To aid in the troubleshooting of Apache Geode issues, you can use the 
provided log collection utility to gather and upload log files and other 
troubleshooting artifacts. This tool is only supported on Linux machines.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/logging_categories.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/logging_categories.html.md.erb 
b/geode-docs/managing/logging/logging_categories.html.md.erb
new file mode 100644
index 0000000..f22a23d
--- /dev/null
+++ b/geode-docs/managing/logging/logging_categories.html.md.erb
@@ -0,0 +1,230 @@
+---
+title:  Understanding Log Messages and Their Categories
+---
+
+System logging messages typically pertain to startup; logging management; 
connection and system membership; distribution; or cache, region, and entry 
management.
+
+-   **Startup information**. Describe the Java version, the Geode native 
version, the host system, current working directory, and environment settings. 
These messages contain all information about the system and configuration the 
process is running with.
+-   **Logging management**. Pertain to the maintenance of the log files 
themselves. This information is always in the main log file (see the discussion 
at Log File Name).
+-   **Connections and system membership**. Report on the arrival and departure 
of distributed system members (including the current member) and any 
information related to connection activities or failures. This includes 
information on communication between tiers in a hierarchical cache.
+-   **Distribution**. Report on the distribution of data between system 
members. These messages include information about region configuration, entry 
creation and modification, and region and entry invalidation and destruction.
+-   **Cache, region, and entry management**. Cache initialization, listener 
activity, locking and unlocking, region initialization, and entry updates.
+
+## <a id="how_logging_works__section_578DFE8DD92F4237A8571593EAC9C3B1" 
class="no-quick-link"></a>Structure of a Log Message
+
+Every logged message contains:
+
+-   The message header within square brackets:
+    1.  The message level
+    2.  The time the message was logged
+    3.  The ID of the connection and thread that logged the message, which 
might be the main program or a system management process
+-   The message itself, which can be a string and/or an exception with the 
exception stack trace
+
+``` pre
+[config 2005/11/08 15:46:08.710 PST PushConsumer main nid=0x1]
+Cache initialized using "file:/Samples/quickstart/xml/PushConsumer.xml".
+```
+
+## <a id="how_logging_works__section_43A099C67FF04A1EB0A07B617D653A38" 
class="no-quick-link"></a>Log File Name
+
+Specify your Geode system member's main log in the gemfire property `log-file` 
setting.
+
+Geode uses this name for the most recent log file, actively in use if the 
member is running, or used for the last run. Geode creates the main log file 
when the application starts.
+
+By default, the main log contains the entire log for the member session. If 
you specify a `log-file-size-limit`, Geode splits the logging into these files:
+
+-   **The main, current log**. Holding current logging entries. Named with the 
string you specified in `log-file`.
+-   **Child logs**. Holding older logging entries. These are created by 
renaming the main, current log when it reaches the size limit.
+-   **A metadata log file, with `meta-` prefixed to the name**. Used to track startup, shutdown, child log management, and other logging management operations.
+
+The current log is renamed, or rolled, to the next available child log when 
the specified size limit is reached.
+
+When your application connects with logging enabled, it creates the main log 
file and, if required, the `meta-` log file. If the main log file is present 
when the member starts up, it is renamed to the next available child log to 
make way for new logging.
+
+Your current, main log file always has the name you specified in `log-file`. The old log files and child log files have names derived from the main log file name. These are the pieces of a renamed log or child log file name, where `filename.extension` is the `log-file` specification:
+<img src="../../images/logging-1.gif" 
id="how_logging_works__image_A144E5195FDA49A1A8914F233495BA88" class="image" />
+
+If child logs are not used, the child file sequence number is a constant 00 
(two zeros).
+
+For locators, the log file name is fixed. For the standalone locator started 
in `gfsh`, it is always named `<locator_name>.log` where the locator\_name 
corresponds to the name specified at locator startup. For the locator that runs 
colocated inside another member, the log file is the member’s log file.
+
+For applications and servers, your log file specification can be relative or absolute. If no file is specified, the defaults are standard output for applications, `<server_name>.log` for servers started with gfsh, and `cacheserver.log` for servers started with the older cacheserver script.
+
+To figure out the member's most recent activities, look at the `meta-` log 
file or, if no meta file exists, the main log file.
+
+## <a id="how_logging_works__section_D464FDFFC30141F385689A47CE5E8D38" 
class="no-quick-link"></a>How the System Renames Logs
+
+The log file that you specify is the base name used for all logging and 
logging archives. If a log file with the specified name already exists at 
startup, the distributed system automatically renames it before creating the 
current log file. This is a typical directory listing after a few runs with 
`log-file=system.log`:
+
+``` pre
+bash-2.05$ ls -tlra system*
+-rw-rw-r-- 1 jpearson users 11106 Nov 3 11:07 system-01-00.log
+-rw-rw-r-- 1 jpearson users 11308 Nov 3 11:08 system-02-00.log
+-rw-rw-r-- 1 jpearson users 11308 Nov 3 11:09 system.log
+bash-2.05$
+```
+
+The first run created `system.log` with a timestamp of Nov 3 11:07. The second 
run renamed that file to `system-01-00.log` and created a new `system.log` with 
a timestamp of Nov 3 11:08. The third run renamed that file to 
`system-02-00.log` and created the file named `system.log` in this listing.
+
+When the distributed system renames the log file, it assigns the next 
available number to the new file, as XX of `filename-XX-YY.extension`. This 
next available number depends on existing old log files and also on any old 
statistics archives. The system assigns the next number that is higher than any 
in use for statistics or logging. This keeps current log files and statistics 
archives paired up regardless of the state of the older files in the directory. 
Thus, if an application is archiving statistics and logging to `system.log` and 
`statArchive.gfs`, and it runs in a Unix directory with these files:
+
+``` pre
+bash-2.05$ ls -tlr stat* system*
+-rw-rw-r-- 1 jpearson users 56143 Nov 3 11:07 statArchive-01-00.gfs
+-rw-rw-r-- 1 jpearson users 56556 Nov 3 11:08 statArchive-02-00.gfs
+-rw-rw-r-- 1 jpearson users 56965 Nov 3 11:09 statArchive-03-00.gfs
+-rw-rw-r-- 1 jpearson users 11308 Nov 3 11:27 system-01-00.log
+-rw-rw-r-- 1 jpearson users 59650 Nov 3 11:34 statArchive.gfs
+-rw-rw-r-- 1 jpearson users 18178 Nov 3 11:34 system.log
+```
+
+the directory contents after the run look like this:
+
+``` pre
+bash-2.05$ ls -ltr stat* system*
+-rw-rw-r-- 1 jpearson users 56143 Nov 3 11:07 statArchive-01-00.gfs
+-rw-rw-r-- 1 jpearson users 56556 Nov 3 11:08 statArchive-02-00.gfs
+-rw-rw-r-- 1 jpearson users 56965 Nov 3 11:09 statArchive-03-00.gfs
+-rw-rw-r-- 1 jpearson users 11308 Nov 3 11:27 system-01-00.log
+-rw-rw-r-- 1 jpearson users 59650 Nov 3 11:34 statArchive-04-00.gfs
+-rw-rw-r-- 1 jpearson users 18178 Nov 3 11:34 system-04-00.log
+-rw-rw-r-- 1 jpearson users 55774 Nov 4 10:08 statArchive.gfs
+-rw-rw-r-- 1 jpearson users 17681 Nov 4 10:08 system.log
+
+```
+
+The statistics and the log file are renamed using the next integer that is 
available to both, so the log file sequence jumps past the gap in this case.
+
+## <a id="how_logging_works__section_02D8D53AC740490D842C6525FA7DB815" 
class="no-quick-link"></a>Log Level
+
+The higher the log level, the more important and urgent the message. If you 
are having problems with your system, a first-level approach is to lower the 
log-level (thus sending more of the detailed messages to the log file) and 
recreate the problem. The additional log messages often help uncover the source.
+
+These are the levels, in descending order, with sample output:
+
+-   **severe (highest level)**. This level indicates a serious failure. In 
general, severe messages describe events that are of considerable importance 
that will prevent normal program execution. You will likely need to shut down 
or restart at least part of your system to correct the situation.
+
+    This severe error was produced by configuring a system member to connect 
to a non-existent locator:
+
+    ``` pre
+    [severe 2005/10/24 11:21:02.908 PDT nameFromGemfireProperties
+    DownHandler (FD_SOCK) nid=0xf] GossipClient.getInfo():
+    exception connecting to host localhost:30303:
+    java.net.ConnectException: Connection refused
+    ```
+
+-   **error**. This level indicates that something is wrong in your system. 
You should be able to continue running, but the operation noted in the error 
message failed.
+
+    This error was produced by throwing a `Throwable` from a `CacheListener`. 
While dispatching events to a customer-implemented cache listener, Geode 
catches any `Throwable` thrown by the listener and logs it as an error. The 
text shown here is followed by the output from the `Throwable` itself.
+
+    ``` pre
+    [error 2007/09/05 11:45:30.542 PDT gemfire1_newton_18222
+    <vm_2_thr_5_client1_newton_18222-0x472e> nid=0x6d443bb0]
+    Exception occurred in CacheListener
+    ```
+
+-   **warning**. This level indicates a potential problem. In general, warning 
messages describe events that are of interest to end users or system managers, 
or that indicate potential problems in the program or system.
+
+    This message was obtained by starting a client with a Pool configured with 
queueing enabled when there was no server running to create the client’s 
queue:
+
+    ``` pre
+    [warning 2008/06/09 13:09:28.163 PDT <queueTimer-client> tid=0xe]
+    QueueManager - Could not create a queue. No queue servers available
+    ```
+
+    This message was obtained by trying to get an entry in a client region 
while there was no server running to respond to the client request:
+
+    ``` pre
+    [warning 2008/06/09 13:12:31.833 PDT <main> tid=0x1] Unable to create a
+    connection in the allowed time
+    org.apache.geode.cache.client.NoAvailableServersException
+        at 
org.apache.geode.cache.client.internal.pooling.ConnectionManagerImpl.
+    borrowConnection(ConnectionManagerImpl.java:166)
+    . . .
+    org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1122
+    )
+    ```
+
+-   **info**. This is for informational messages, typically geared to end 
users and system administrators.
+
+    This is a typical info message created at system member startup. This 
indicates that no other `DistributionManager`s are running in the distributed 
system, which means no other system members are running:
+
+    ``` pre
+    [info 2005/10/24 11:51:35.963 PDT CacheRunner main nid=0x1]
+    DistributionManager straw(7368):41714 started on 192.0.2.0[10333]
+    with id straw(7368):41714 (along with 0 other DMs)
+    ```
+
+    When another system member joins the distributed system, these info 
messages are output by the members that are already running:
+
+    ``` pre
+    [info 2005/10/24 11:52:03.934 PDT CacheRunner P2P message reader for
+    straw(7369):41718 nid=0x21] Member straw(7369):41718 has joined the
+    distributed cache.
+    ```
+
+    When another member leaves because of an interrupt or through normal 
program termination:
+
+    ``` pre
+    [info 2005/10/24 11:52:05.128 PDT CacheRunner P2P message reader for
+    straw(7369):41718 nid=0x21] Member straw(7369):41718 has left the
+    distributed cache.
+    ```
+
+    And when another member is killed:
+
+    ``` pre
+    [info 2005/10/24 13:08:41.389 PDT CacheRunner DM-Puller nid=0x1b] Member
+    straw(7685):41993 has unexpectedly left the distributed cache.
+    ```
+
+-   **config**. This is the default setting for logging. This level provides 
static configuration messages that are often used to debug problems associated 
with particular configurations.
+
+    You can use this config message to verify your startup configuration:
+
+    ``` pre
+    [config 2008/08/08 14:28:19.862 PDT CacheRunner <main> tid=0x1] Startup 
Configuration:
+    ack-severe-alert-threshold="0"
+    ack-wait-threshold="15"
+    archive-disk-space-limit="0"
+    archive-file-size-limit="0"
+    async-distribution-timeout="0"
+    async-max-queue-size="8"
+    async-queue-timeout="60000"
+    bind-address=""
+    cache-xml-file="cache.xml"
+    conflate-events="server"
+    conserve-sockets="true"
+      ...
+    socket-buffer-size="32768"
+    socket-lease-time="60000"
+    ssl-ciphers="any"
+    ssl-enabled="false"
+    ssl-protocols="any"
+    ssl-require-authentication="true"
+    start-locator=""
+    statistic-archive-file=""
+    statistic-sample-rate="1000"
+    statistic-sampling-enabled="false"
+    tcp-port="0"
+    udp-fragment-size="60000"
+    udp-recv-buffer-size="1048576"
+    udp-send-buffer-size="65535"
+    ```
+
+-   **fine**. This level provides tracing information that is generally of interest to developers. It is used for the lowest-volume, most important tracing messages.
+
+    **Note:**
+    Generally, you should only use this level if instructed to do so by 
technical support. At this logging level, you will see a lot of noise that 
might not indicate a problem in your application. This level creates very 
verbose logs that may require significantly more disk space than the higher 
levels.
+
+    ``` pre
+    [fine 2011/06/21 11:27:24.689 PDT <locatoragent_ds_w1-gst-dev04_2104> 
tid=0xe] SSL Configuration:
+        ssl-enabled = false
+    ```
+
+-   **finer, finest, and all**. These levels exist for internal use only. They 
produce a large amount of data and so consume large amounts of disk space and 
system resources.
+    **Note:**
+    Do not use these settings unless asked to do so by technical support.
+
+**Note:**
+Geode no longer supports setting system properties for VERBOSE logging. To enable VERBOSE logging, see [Advanced Users—Configuring Log4j 2 for Geode](configuring_log4j2.html).

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/logging_whats_next.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/logging_whats_next.html.md.erb 
b/geode-docs/managing/logging/logging_whats_next.html.md.erb
new file mode 100644
index 0000000..4ffda14
--- /dev/null
+++ b/geode-docs/managing/logging/logging_whats_next.html.md.erb
@@ -0,0 +1,39 @@
+---
+title:  Naming, Searching, and Creating Log Files
+---
+
+The best way to manage and understand the logs is to have each member log to 
its own files.
+
+## <a id="logging_whats_next__section_82C0D09E8A414693A7E6342E30209FC4" 
class="no-quick-link"></a>Log File Naming Recommendation
+
+For members running on the same machine, you can have them log to their own 
files by starting them in different working directories and using the same, 
relative `log-file` specification. For example, you could set this in 
`<commonDirectoryPath>/gemfire.properties`:
+
+``` pre
+log-file=./log/member.log
+```
+
+then start each member in a different directory with this command, which 
points to the common properties file:
+
+``` pre
+java -DgemfirePropertyFile=<commonDirectoryPath>/gemfire.properties
+```
+
+This way, each member has its own log files under its own working directory.
+
+## <a id="logging_whats_next__section_5502E3248A424E978B13B1142360F445" 
class="no-quick-link"></a>Searching the Log Files
+
+For the clearest picture, merge the log files with the `gfsh export logs` command:
+
+``` pre
+gfsh> export logs --dir=myDir --merge-log=true
+```
+
+Search for lines that begin with these strings:
+
+-   \[warning
+-   \[error
+-   \[severe
+
+## <a id="logging_whats_next__section_32F26033A2134525BCC10F3A6C6FAD7B" 
class="no-quick-link"></a>Creating Your Own Log Messages
+
+In addition to the system logs, you can add your own application logs from your Java code. For information on adding custom logging to your applications, see the online Java documentation for the `org.apache.geode.LogWriter` interface. Both system and application logging are output and stored according to your logging configuration settings.
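+
+For example, a minimal sketch of writing application messages through the cache's `LogWriter` (assuming an existing `Cache` reference named `cache`; the messages shown are illustrative):
+
+``` pre
+import org.apache.geode.LogWriter;
+import org.apache.geode.cache.Cache;
+
+// The cache's LogWriter writes to this member's configured log file
+LogWriter logWriter = cache.getLogger();
+
+logWriter.info("Region initialization complete");
+if (logWriter.fineEnabled()) {
+  logWriter.fine("Detailed diagnostic message");
+}
+```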

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/logging/setting_up_logging.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/logging/setting_up_logging.html.md.erb 
b/geode-docs/managing/logging/setting_up_logging.html.md.erb
new file mode 100644
index 0000000..2222414
--- /dev/null
+++ b/geode-docs/managing/logging/setting_up_logging.html.md.erb
@@ -0,0 +1,59 @@
+---
+title:  Set Up Logging
+---
+
+You configure logging in a member's `gemfire.properties` or at startup with 
`gfsh`.
+
+<a id="setting_up_logging__section_35F8A9028A91441785BCACD6CD40A498"></a>
+Before you begin, make sure you understand [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
+
+1.  Run a time synchronization service such as NTP on all Geode host machines. 
This is the only way to produce logs that are useful for troubleshooting. 
Synchronized time stamps ensure that log messages from different hosts can be 
merged to accurately reproduce a chronological history of a distributed run.
+2.  Use a sniffer to monitor your logs. Look for new or unexpected warnings, errors, or severe messages. The logs output by your system have their own characteristics, indicative of your system configuration and of the particular behavior of your applications, so you must become familiar with your applications' logs to use them effectively.
+3.  Configure member logging in each member's `gemfire.properties` as needed:
+
+    ``` pre
+    # Default gemfire.properties log file settings
+    log-level=config
+    log-file=
+    log-file-size-limit=0
+    log-disk-space-limit=0
+    ```
+
+    **Note:**
+    You can also specify logging parameters when you start up members (either 
locators or servers) using the `gfsh` command-line utility. In addition, you 
can modify log file properties and log-level settings while a member is already 
running by using the [alter 
runtime](../../tools_modules/gfsh/command-pages/alter.html#topic_7E6B7E1B972D4F418CB45354D1089C2B)
 command.
+
+    1.  Set `log-level`. Options are `severe` (the highest level), `error`, 
`warning`, `info`, `config`, and `fine`. The lower levels include higher level 
settings, so a setting of `warning` would log `warning`, `error`, and `severe` 
messages. For general troubleshooting, we recommend setting the log level at 
`config` or higher.  The `fine` setting can fill up disk rather quickly and 
impact system performance. Use `fine` only if necessary.
+
+    2.  Specify the log file name in `log-file`. This can be relative or 
absolute. If this property is not specified, the defaults are:
+        -   Standard output for applications
+        -   For servers, the default log file location is:
+
+            ``` pre
+            working-directory/server-name.log
+            ```
+
+            By default, when starting a server through `gfsh`, the *working directory* corresponds to the directory (named after itself) that the cache server creates upon startup. Alternatively, you can specify a different working directory path when you start the cache server. The *server-name* corresponds to the name of the cache server provided upon startup.
+        -   For a standalone locator, the default log file location is:
+
+            ``` pre
+            working-directory/locator-name.log
+            ```
+
+            By default, when starting a locator through `gfsh`, the *working directory* corresponds to the directory (named after itself) created when the locator starts up. Alternatively, you can specify a different working directory path when you start a locator. The *locator-name* corresponds to the name of the locator provided upon startup. If you are using a colocated or embedded locator, the locator logs will be part of the member's log file.
+
+        For the easiest logs examination and troubleshooting, send your logs 
to files instead of standard out.
+        **Note:**
+        Make sure each member logs to its own files. This makes the logs 
easier to decipher.
+
+    3.  Set the maximum size of a single log file in `log-file-size-limit`. If 
not set, the single, main log file is used. If set, the metadata file, the main 
log, and rolled child logs are used.
+    4.  Set the maximum size of all log files in `log-disk-space-limit`. If 
non-zero, this limits the combined size of all inactive log files, deleting 
oldest files first to stay under the limit. A zero setting indicates no limit.
+
+4.  If you are using the `gfsh` command-line interface, `gfsh` can create its own log file in the directory where you run the `gfsh` or `gfsh.bat` script. By default, gfsh does not generate log files for itself. To enable gfsh logs, set the following system property to the desired log level before starting gfsh:
+
+    ``` pre
+    export 
JAVA_ARGS=-Dgfsh.log-level=[severe|warning|info|config|fine|finer|finest]
+    ```
+
+    gfsh log files are named `gfsh-0_0.log`.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/configuring_rmi_connector.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/management/configuring_rmi_connector.html.md.erb 
b/geode-docs/managing/management/configuring_rmi_connector.html.md.erb
new file mode 100644
index 0000000..c81a3fa
--- /dev/null
+++ b/geode-docs/managing/management/configuring_rmi_connector.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Configuring RMI Registry Ports and RMI Connectors
+---
+
+Geode programmatically emulates out-of-the-box JMX provided by Java and 
creates a JMXServiceURL with RMI Registry and RMI Connector ports on all 
manageable members.
+
+## <a 
id="concept_BC793A7ACF9A4BD9A29C2DCC6894767D__section_143531EBBCF94033B8058FCF5F8A5A0D"
 class="no-quick-link"></a>Configuring JMX Manager Port and Bind Addresses
+
+You can configure a specific connection port and address when launching a process that will host the Geode JMX Manager. To do this, specify values for `jmx-manager-bind-address`, which specifies the JMX manager's IP address, and `jmx-manager-port`, which defines the RMI connection port.
+
+The default Geode JMX Manager RMI port is 1099. You may need to modify this 
default if 1099 is reserved for other uses.
+
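+One hedged sketch of supplying these properties programmatically when a member is embedded in an application (the address and port values below are illustrative assumptions):
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+
+// Start this member as a JMX Manager on an explicit bind address and RMI port
+Cache cache = new CacheFactory()
+    .set("jmx-manager", "true")
+    .set("jmx-manager-start", "true")
+    .set("jmx-manager-bind-address", "192.0.2.10")
+    .set("jmx-manager-port", "2099")
+    .create();
+```
+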
+## <a 
id="concept_BC793A7ACF9A4BD9A29C2DCC6894767D__section_BF6352A05CE94F35A8355232D22AC2BC"
 class="no-quick-link"></a>Using Out-of-the-Box RMI Connectors
+
+If for some reason you need to use standard JMX RMI in your deployment for 
other monitoring purposes, set the Geode property `jmx-manager-port` to 0 on 
any members where you want to use standard JMX RMI.
+
+If you use out-of-the-box JMX RMI instead of starting an embedded Geode JMX Manager, you should consider setting `-Dsun.rmi.dgc.server.gcInterval=Long.MAX_VALUE-1` when starting the JVM for your applications and client processes. Every Geode process internally applies this setting before creating and starting the JMX RMI connector in order to prevent full garbage collection from pausing processes.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/gfsh_and_management_api.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/management/gfsh_and_management_api.html.md.erb 
b/geode-docs/managing/management/gfsh_and_management_api.html.md.erb
new file mode 100644
index 0000000..d7127d8
--- /dev/null
+++ b/geode-docs/managing/management/gfsh_and_management_api.html.md.erb
@@ -0,0 +1,52 @@
+---
+title:  Executing gfsh Commands through the Management API
+---
+
+You can also use management APIs to execute gfsh commands programmatically.
+
+**Note:**
+If you start the JMX Manager programmatically and wish to enable command 
processing, you must also add the absolute path of `gfsh-dependencies.jar` 
(located in `$GEMFIRE/lib` of your Geode installation) to the CLASSPATH of your 
application. Do not copy this library to your CLASSPATH because this library 
refers to other dependencies in `$GEMFIRE/lib` by a relative path. The 
following code samples demonstrate how to process and execute `gfsh` commands 
using the Java API.
+
+First, retrieve a CommandService instance.
+
+**Note:**
+The CommandService API is currently only available on JMX Manager nodes.
+
+``` pre
+// Get existing CommandService instance or create new if it doesn't exist
+commandService = CommandService.createLocalCommandService(cache);
+
+// OR simply get CommandService instance if it exists, don't create new one
+CommandService commandService = CommandService.getUsableLocalCommandService();
+```
+
+Next, process the command and its output:
+
+``` pre
+// Process the user specified command String
+Result regionListResult = commandService.processCommand("list regions");
+ 
+// Iterate through Command Result in String form line by line
+while (regionListResult.hasNextLine()) {
+   System.out.println(regionListResult.nextLine());
+}
+      
+```
+
+Alternatively, instead of processing the command, you can create a CommandStatement object from the command string, which can be reused.
+
+``` pre
+// Create a command statement that can be reused multiple times
+CommandStatement showDeadLocksCmdStmt = commandService.createCommandStatement
+    ("show dead-locks --file=deadlock-info.txt");
+Result showDeadlocksResult = showDeadLocksCmdStmt.process();
+
+// If there is a file as a part of Command Result, it can be saved 
+// to a specified directory
+if (showDeadlocksResult.hasIncomingFiles()) {
+    showDeadlocksResult.saveIncomingFiles(System.getProperty("user.dir") + 
+                  "/commandresults");
+}
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/jmx_manager_node.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/management/jmx_manager_node.html.md.erb 
b/geode-docs/managing/management/jmx_manager_node.html.md.erb
new file mode 100644
index 0000000..ebc4571
--- /dev/null
+++ b/geode-docs/managing/management/jmx_manager_node.html.md.erb
@@ -0,0 +1,20 @@
+---
+title: JMX Manager Operations
+---
+<a id="topic_36C918B4202D45F3AC225FFD23B11D7C"></a>
+
+
+Any member can host an embedded JMX Manager, which provides a federated view 
of all MBeans for the distributed system. The member can be configured to be a 
manager at startup or anytime during its life by invoking the appropriate API 
calls on the ManagementService.
+
+You need to have a JMX Manager started in your distributed system in order to 
use Geode management and monitoring tools such as 
[gfsh](../../tools_modules/gfsh/chapter_overview.html) and [Geode 
Pulse](../../tools_modules/pulse/chapter_overview.html).
+
+**Note:**
+Each node that acts as the JMX Manager has additional memory requirements 
depending on the number of resources that it is managing and monitoring. Being 
a JMX Manager can increase the memory footprint of any process, including 
locator processes. See [Memory Requirements for Cached 
Data](../../reference/topics/memory_requirements_for_cache_data.html#calculating_memory_requirements)
 for more information on calculating memory overhead on your Geode processes.
+
+-   **[Starting a JMX Manager](jmx_manager_operations.html)**
+
+-   **[Configuring a JMX 
Manager](jmx_manager_operations.html#topic_263072624B8D4CDBAD18B82E07AA44B6)**
+
+-   **[Stopping a JMX 
Manager](jmx_manager_operations.html#topic_5B6DF783A14241399DC25C6EE8D0048A)**
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/jmx_manager_operations.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/management/jmx_manager_operations.html.md.erb 
b/geode-docs/managing/management/jmx_manager_operations.html.md.erb
new file mode 100644
index 0000000..9dc8a4e
--- /dev/null
+++ b/geode-docs/managing/management/jmx_manager_operations.html.md.erb
@@ -0,0 +1,195 @@
+---
+title: Starting a JMX Manager
+---
+
+<a id="topic_686158E9AFBD47518BE1B4BEB232C190"></a>
+
+
+JMX Manager nodes are members that manage other Geode members (as well as themselves). A JMX Manager node can manage all other members in the distributed system. Typically a locator will function as the JMX Manager, but you can also turn any other distributed system member, such as a server, into a JMX Manager node.
+
+To allow a server to become a JMX Manager, you configure the Geode property `jmx-manager=true` in the server's `gemfire.properties` file. This property configures the node to become a JMX Manager node passively; if gfsh cannot locate a JMX Manager when connecting to the distributed system, the server node will be started as a JMX Manager node.
+
+**Note:**
+The default property setting for all locators is `gemfire.jmx-manager=true`. 
For other members, the default property setting is `gemfire.jmx-manager=false`.
+
+To force a server to become a JMX Manager node whenever it is started, set the 
Geode properties `jmx-manager-start=true` and `jmx-manager=true` in the 
server's gemfire.properties file. Note that both of these properties must be 
set to true for the node.
+
+To start the member as a JMX Manager node on the command line, provide `--J=-Dgemfire.jmx-manager-start=true` and `--J=-Dgemfire.jmx-manager=true` as arguments to either the `start server` or `start locator` command.
+
+For example, to start a server as a JMX Manager on the gfsh command line:
+
+``` pre
+gfsh>start server --name=<server-name> --J=-Dgemfire.jmx-manager=true \
+--J=-Dgemfire.jmx-manager-start=true
+```
+
+By default, any locator can become a JMX Manager when started. When you start 
up a locator, if no other JMX Manager is detected in the distributed system, 
the locator starts one automatically. If you start a second locator, it will 
detect the current JMX Manager and will not start up another JMX Manager unless 
the second locator's `gemfire.jmx-manager-start` property is set to true.
+
+For most deployments, you only need to have one JMX Manager per distributed 
system. However, you can run more than JMX Manager if necessary. If you want to 
provide high-availability and redundancy for the Pulse monitoring tool, or if 
you are running additional JMX clients other than gfsh, then use the 
`jmx-manager-start=true` property to force individual nodes (either locators or 
servers) to become JMX Managers at startup. Since there is some performance 
overhead to being a JMX Manager, we recommend using locators as JMX Managers. 
If you do not want a locator to become a JMX Manager, then you must use the 
`jmx-manager=false` property when you start the locator.
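+
+For example, to keep a particular locator from becoming a JMX Manager, you might start it with the property overridden on the command line (the locator name is illustrative):
+
+``` pre
+gfsh>start locator --name=locator2 --J=-Dgemfire.jmx-manager=false
+```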
+
+After the node becomes a JMX Manager, all other `jmx-manager-*` configuration 
properties listed in [Configuring a JMX 
Manager](jmx_manager_operations.html#topic_263072624B8D4CDBAD18B82E07AA44B6) 
are applied.
+
+The following example starts a new locator that also starts an embedded JMX Manager (after detecting that no other JMX Manager exists). In addition, `gfsh` automatically connects you to the new JMX Manager:
+
+``` pre
+gfsh>start locator --name=locator1
+Starting a GemFire Locator in /home/user/test2/locator1...
+............................................
+Locator in /home/user/test2/locator1 on ubuntu.local[10334] as locator1 is 
currently online.
+Process ID: 2081
+Uptime: 30 seconds
+GemFire Version: 8.0.0
+Java Version: 1.7.0_65
+Log File: /home/user/test2/locator1/locator1.log
+JVM Arguments: -Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false
+-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+Class-Path: 
/home/user/Pivotal_GemFire_800_b48319_Linux/lib/locator-dependencies.jar:/usr/local/java/lib/tools.jar
+
+Successfully connected to: [host=ubuntu.local, port=1099]
+
+Cluster configuration service is up and running.
+```
+
+Or you can enter the command directly in your terminal:
+
+``` pre
+$ gfsh start locator --name=locator1
+....................................
+Locator in /home/user/locator1 on ubuntu.local[10334] as locator1 is currently 
online.
+Process ID: 2359
+Uptime: 21 seconds
+GemFire Version: 8.0.0
+Java Version: 1.7.0_65
+Log File: /home/user/locator1/locator1.log
+JVM Arguments: -Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false
+ -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
+Class-Path: 
/home/user/Pivotal_GemFire_800_b48319_Linux/lib/locator-dependencies.jar:/usr/local/java/lib/tools.jar
+
+Successfully connected to: [host=ubuntu.local, port=1099]
+
+Cluster configuration service is up and running.
+```
+
+Locators also keep track of all nodes that can become a JMX Manager.
+
+Immediately after creating its cache, the JMX Manager node begins federating the MBeans from other members. Once the JMX Manager node is ready, it sends a notification to all other members informing them that it is a new JMX Manager. The other members then put their complete MBean states into their hidden management regions.
+
+At any point, you can determine whether a node is a JMX Manager by using the `MemberMXBean` `isManager()` method.
+
+Using the Java API, any managed node that has been configured with `jmx-manager=true` can also be turned into a JMX Manager node by invoking the `ManagementService` `startManager()` method.
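+
+As an illustration only, the two calls above might be combined as follows. Package names assume a current Geode release (older GemFire-based releases use the `com.gemstone.gemfire.*` packages instead), and the configuration shown is hypothetical:
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.management.ManagementService;
+import org.apache.geode.management.MemberMXBean;
+
+public class JmxManagerExample {
+  public static void main(String[] args) {
+    // Create a cache on a member that is allowed to become a JMX Manager.
+    Cache cache = new CacheFactory()
+        .set("jmx-manager", "true")
+        .create();
+
+    ManagementService service = ManagementService.getManagementService(cache);
+
+    // Turn this member into a JMX Manager programmatically.
+    service.startManager();
+
+    // Check whether this member is currently acting as a JMX Manager.
+    MemberMXBean member = service.getMemberMXBean();
+    System.out.println("Is JMX Manager: " + member.isManager());
+
+    cache.close();
+  }
+}
+```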
+
+**Note:**
+If you start the JMX Manager programmatically and wish to enable command processing, you must also add the absolute path of `gfsh-dependencies.jar` (located in `$GEMFIRE/lib` of your Geode installation) to the CLASSPATH of your application. Reference the JAR in place rather than copying it elsewhere, because it refers to other dependencies in `$GEMFIRE/lib` by a relative path.
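+
+For example, assuming `$GEMFIRE` points at your Geode installation and `com.example.MyApp` is a hypothetical application class, the JAR can be referenced in place on the classpath:
+
+``` pre
+java -cp "$GEMFIRE/lib/gfsh-dependencies.jar:/path/to/myapp.jar" com.example.MyApp
+```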
+
+## <a id="topic_263072624B8D4CDBAD18B82E07AA44B6" 
class="no-quick-link"></a>Configuring a JMX Manager
+
+In the `gemfire.properties` file, you configure a JMX Manager as follows.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Property</th>
+<th>Description</th>
+<th>Default</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>http-service-port</td>
+<td>If non-zero, then Geode starts an embedded HTTP service that listens on 
this port. The HTTP service is used to host the Geode Pulse Web application. If 
you are hosting the Pulse web app on your own Web server, then disable this 
embedded HTTP service by setting this property to zero. Ignored if <code 
class="ph codeph">jmx-manager</code> is false.</td>
+<td>7070</td>
+</tr>
+<tr class="even">
+<td>http-service-bind-address</td>
+<td>If set, then the Geode member binds the embedded HTTP service to the 
specified address. If this property is not set but the HTTP service is enabled 
using <code class="ph codeph">http-service-port</code>, then Geode binds the 
HTTP service to the member's local address.</td>
+<td><em>not set</em></td>
+</tr>
+<tr class="odd">
+<td>jmx-manager</td>
+<td><p>If <code class="ph codeph">true</code> then this member can become a 
JMX Manager. All other <code class="ph codeph">jmx-manager-*</code> properties 
are used when it does become a JMX Manager. If this property is false then all 
other <code class="ph codeph">jmx-manager-*</code> properties are ignored.</p>
+<p>The default value is <code class="ph codeph">true</code> on 
locators.</p></td>
+<td>false (true on locators)</td>
+</tr>
+<tr class="even">
+<td>jmx-manager-access-file</td>
+<td><p>By default the JMX Manager allows full access to all MBeans by any 
client. If this property is set to the name of a file, then it can restrict 
clients to only reading MBeans; they cannot modify MBeans. The access level can 
be configured differently in this file for each user name defined in the 
password file. For more information about the format of this file see Oracle's 
documentation of the <code class="ph 
codeph">com.sun.management.jmxremote.access.file</code> system property. 
Ignored if <code class="ph codeph">jmx-manager</code> is false or if <code 
class="ph codeph">jmx-manager-port</code> is zero.</p></td>
+<td><em>not set</em></td>
+</tr>
+<tr class="odd">
+<td>jmx-manager-bind-address</td>
+<td>By default, the JMX Manager when configured with a port listens on all the 
local host's addresses. You can use this property to configure which particular 
IP address or host name the JMX Manager will listen on. This property is 
ignored if <code class="ph codeph">jmx-manager</code> is false or <code 
class="ph codeph">jmx-manager-port</code> is zero. This address also applies to 
the Geode Pulse server if you are hosting a Pulse web application.</td>
+<td><em>not set</em></td>
+</tr>
+<tr class="even">
+<td>jmx-manager-hostname-for-clients</td>
+<td>Hostname given to clients that ask the locator for the location of a JMX 
Manager. By default the IP address of the JMX Manager is used. However, for 
clients on a different network, you can configure a different hostname to be 
given to clients. Ignored if <code class="ph codeph">jmx-manager</code> is 
false or if <code class="ph codeph">jmx-manager-port</code> is zero.</td>
+<td><em>not set</em></td>
+</tr>
+<tr class="odd">
+<td>jmx-manager-password-file</td>
+<td>By default the JMX Manager allows clients without credentials to connect. 
If this property is set to the name of a file, only clients that connect with 
credentials that match an entry in this file will be allowed. Most JVMs require 
that the file is only readable by the owner. For more information about the 
format of this file see Oracle's documentation of the 
com.sun.management.jmxremote.password.file system property. Ignored if 
jmx-manager is false or if jmx-manager-port is zero. </td>
+<td><em>not set</em></td>
+</tr>
+<tr class="even">
+<td>jmx-manager-port</td>
+<td>Port on which this JMX Manager listens for client connections. If this 
property is set to zero, Geode does not allow remote client connections. 
Alternatively, use the standard system properties supported by the JVM for 
configuring access from remote JMX clients. Ignored if jmx-manager is false. 
The default RMI port is 1099.</td>
+<td>1099</td>
+</tr>
+<tr class="odd">
+<td>jmx-manager-ssl</td>
+<td>If true and <code class="ph codeph">jmx-manager-port</code> is not zero, 
the JMX Manager accepts only SSL connections. The ssl-enabled property does not 
apply to the JMX Manager, but the other SSL properties do. This allows SSL to 
be configured for just the JMX Manager without needing to configure it for the 
other Geode connections. Ignored if <code class="ph codeph">jmx-manager</code> 
is false.</td>
+<td>false</td>
+</tr>
+<tr class="even">
+<td>jmx-manager-start</td>
+<td>If true, this member starts a JMX Manager when it creates a cache. In most 
cases you should not set this property to true because a JMX Manager is 
automatically started when needed on a member that sets <code class="ph 
codeph">jmx-manager</code> to true. Ignored if jmx-manager is false.</td>
+<td>false</td>
+</tr>
+<tr class="odd">
+<td>jmx-manager-update-rate</td>
+<td>The rate, in milliseconds, at which this member pushes updates to any JMX 
Managers. Currently this value should be greater than or equal to the <code 
class="ph codeph">statistic-sample-rate</code>. Setting this value too high 
causes <code class="ph codeph">gfsh</code> and Geode Pulse to see stale 
values.</td>
+<td>2000</td>
+</tr>
+</tbody>
+</table>
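+
+For illustration, a locator's `gemfire.properties` that exposes the JMX Manager on a specific address and secures it with a password file might contain entries such as the following (the address, hostname, and file path are placeholders):
+
+``` pre
+# gemfire.properties (locator acting as JMX Manager)
+jmx-manager=true
+jmx-manager-port=1099
+jmx-manager-bind-address=10.0.0.5
+jmx-manager-hostname-for-clients=geode-jmx.example.com
+jmx-manager-password-file=/etc/geode/jmxremote.password
+jmx-manager-update-rate=2000
+http-service-port=7070
+```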
+
+## <a id="topic_5B6DF783A14241399DC25C6EE8D0048A" 
class="no-quick-link"></a>Stopping a JMX Manager
+
+To stop a JMX Manager using gfsh, simply shut down the locator or server 
hosting the JMX Manager.
+
+For a locator:
+
+``` pre
+gfsh>stop locator --dir=locator1
+Stopping Locator running in /home/user/test2/locator1 on ubuntu.local[10334] 
as locator1...
+Process ID: 2081
+Log File: /home/user/test2/locator1/locator1.log
+....
+No longer connected to ubuntu.local[1099].
+```
+
+For a server:
+
+``` pre
+gfsh>stop server --dir=server1
+Stopping Cache Server running in /home/user/test2/server1 ubuntu.local[40404] 
as server1...
+Process ID: 1156
+Log File: /home/user/test2/server1/server1.log
+....
+
+
+No longer connected to ubuntu.local[1099].
+```
+
+Notice that `gfsh` has automatically disconnected you from the stopped JMX 
Manager.
+
+To stop a JMX Manager using the management API, invoke the `ManagementService` `stopManager()` method on the member that is acting as the JMX Manager.
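+
+A minimal sketch, assuming you already hold a reference to the member's cache (package names assume a current Geode release):
+
+``` pre
+import org.apache.geode.cache.Cache;
+import org.apache.geode.management.ManagementService;
+
+public final class StopJmxManager {
+  // Stops the JMX Manager role on this member without shutting the member down.
+  static void stopManagerIfRunning(Cache cache) {
+    ManagementService service = ManagementService.getManagementService(cache);
+    if (service.isManager()) {
+      service.stopManager();
+    }
+  }
+}
+```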
+
+When a JMX Manager stops, it removes the MBeans that were federated from other members from its platform MBeanServer. It also emits a notification to inform other members that it is no longer a JMX Manager.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/list_of_mbean_notifications.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/management/list_of_mbean_notifications.html.md.erb 
b/geode-docs/managing/management/list_of_mbean_notifications.html.md.erb
new file mode 100644
index 0000000..d1ffea1
--- /dev/null
+++ b/geode-docs/managing/management/list_of_mbean_notifications.html.md.erb
@@ -0,0 +1,65 @@
+---
+title: List of JMX MBean Notifications
+---
+<a id="mbean_notifications_list"></a>
+
+
+This topic lists all available JMX notifications emitted by Geode MBeans.
+
+Notifications are emitted by the following MBeans:
+
+-   **[MemberMXBean 
Notifications](list_of_mbean_notifications.html#reference_czt_hq2_vj)**
+
+-   **[MemberMXBean Gateway 
Notifications](list_of_mbean_notifications.html#reference_dzt_hq2_vj)**
+
+-   **[CacheServerMXBean 
Notifications](list_of_mbean_notifications.html#cacheservermxbean_notifications)**
+
+-   **[DistributedSystemMXBean 
Notifications](list_of_mbean_notifications.html#distributedsystemmxbean_notifications)**
+
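+Before the per-MBean tables, here is an illustrative sketch of how an external JMX client could subscribe to these notifications using the standard `javax.management` API. The connection URL, member name, and MBean ObjectName are assumptions; verify the ObjectName of the MBean you want to monitor in your own deployment:
+
+``` pre
+import javax.management.MBeanServerConnection;
+import javax.management.Notification;
+import javax.management.NotificationListener;
+import javax.management.ObjectName;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class MemberNotificationExample {
+  public static void main(String[] args) throws Exception {
+    // Connect to the JMX Manager; host, port, and member name are placeholders.
+    JMXServiceURL url =
+        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
+    try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
+      MBeanServerConnection connection = connector.getMBeanServerConnection();
+
+      // Assumed ObjectName pattern for a member MBean; adjust to your member.
+      ObjectName memberMBean = new ObjectName("GemFire:type=Member,member=server1");
+
+      NotificationListener listener = (Notification n, Object handback) ->
+          System.out.println(n.getType() + ": " + n.getMessage());
+
+      connection.addNotificationListener(memberMBean, listener, null, null);
+
+      // Keep the connection open long enough to receive notifications.
+      Thread.sleep(60000);
+    }
+  }
+}
+```
+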
+## <a id="reference_czt_hq2_vj" class="no-quick-link"></a>MemberMXBean 
Notifications
+
+| Notification Type                                    | Notification Source | Message                                                     |
+|------------------------------------------------------|---------------------|-------------------------------------------------------------|
+| gemfire.distributedsystem.cache.region.created       | Member name or ID   | Region Created with Name &lt;Region Name&gt;                |
+| gemfire.distributedsystem.cache.region.closed        | Member name or ID   | Region Destroyed/Closed with Name &lt;Region Name&gt;       |
+| gemfire.distributedsystem.cache.disk.created         | Member name or ID   | DiskStore Created with Name &lt;DiskStore Name&gt;          |
+| gemfire.distributedsystem.cache.disk.closed          | Member name or ID   | DiskStore Destroyed/Closed with Name &lt;DiskStore Name&gt; |
+| gemfire.distributedsystem.cache.lockservice.created  | Member name or ID   | LockService Created with Name &lt;LockService Name&gt;      |
+| gemfire.distributedsystem.cache.lockservice.closed   | Member name or ID   | Lockservice Closed with Name &lt;LockService Name&gt;       |
+| gemfire.distributedsystem.async.event.queue.created  | Member name or ID   | Async Event Queue is Created in the VM                      |
+| gemfire.distributedsystem.cache.server.started       | Member name or ID   | Cache Server is Started in the VM                           |
+| gemfire.distributedsystem.cache.server.stopped       | Member name or ID   | Cache Server is stopped in the VM                           |
+| gemfire.distributedsystem.locator.started            | Member name or ID   | Locator is Started in the VM                                |
+
+## <a id="reference_dzt_hq2_vj" class="no-quick-link"></a>MemberMXBean Gateway 
Notifications
+
+| Notification Type                                   | Notification Source | Message                                           |
+|-----------------------------------------------------|---------------------|---------------------------------------------------|
+| gemfire.distributedsystem.gateway.sender.created    | Member name or ID   | GatewaySender Created in the VM                   |
+| gemfire.distributedsystem.gateway.sender.started    | Member name or ID   | GatewaySender Started in the VM &lt;Sender Id&gt; |
+| gemfire.distributedsystem.gateway.sender.stopped    | Member name or ID   | GatewaySender Stopped in the VM &lt;Sender Id&gt; |
+| gemfire.distributedsystem.gateway.sender.paused     | Member name or ID   | GatewaySender Paused in the VM &lt;Sender Id&gt;  |
+| gemfire.distributedsystem.gateway.sender.resumed    | Member name or ID   | GatewaySender Resumed in the VM &lt;Sender Id&gt; |
+| gemfire.distributedsystem.gateway.receiver.created  | Member name or ID   | GatewayReceiver Created in the VM                 |
+| gemfire.distributedsystem.gateway.receiver.started  | Member name or ID   | GatewayReceiver Started in the VM                 |
+| gemfire.distributedsystem.gateway.receiver.stopped  | Member name or ID   | GatewayReceiver Stopped in the VM                 |
+| gemfire.distributedsystem.cache.server.started      | Member name or ID   | Cache Server is Started in the VM                 |
+
+## <a id="cacheservermxbean_notifications" 
class="no-quick-link"></a>CacheServerMXBean Notifications
+
+| Notification Type                                     | Notification Source    | Message                                  |
+|-------------------------------------------------------|------------------------|------------------------------------------|
+| gemfire.distributedsystem.cacheserver.client.joined   | CacheServer MBean Name | Client joined with Id &lt;Client ID&gt;  |
+| gemfire.distributedsystem.cacheserver.client.left     | CacheServer MBean Name | Client left with Id &lt;Client ID&gt;    |
+| gemfire.distributedsystem.cacheserver.client.crashed  | CacheServer MBean Name | Client crashed with Id &lt;Client ID&gt; |
+
+## <a id="distributedsystemmxbean_notifications" 
class="no-quick-link"></a>DistributedSystemMXBean Notifications
+
+| Notification Type                               | Notification Source                               | Message                                                                    |
+|-------------------------------------------------|---------------------------------------------------|----------------------------------------------------------------------------|
+| gemfire.distributedsystem.cache.member.joined   | Name or ID of member who joined                   | Member Joined &lt;Member Name or ID&gt;                                    |
+| gemfire.distributedsystem.cache.member.departed | Name or ID of member who departed                 | Member Departed &lt;Member Name or ID&gt; has crashed = &lt;true/false&gt; |
+| gemfire.distributedsystem.cache.member.suspect  | Name or ID of member who is suspected             | Member Suspected &lt;Member Name or ID&gt; By &lt;Who Suspected&gt;        |
+| system.alert.\*                                 | DistributedSystem("&lt;DistributedSystem ID&gt;") | Alert Message                                                              |
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/management/list_of_mbeans.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/management/list_of_mbeans.html.md.erb 
b/geode-docs/managing/management/list_of_mbeans.html.md.erb
new file mode 100644
index 0000000..c8a8ad5
--- /dev/null
+++ b/geode-docs/managing/management/list_of_mbeans.html.md.erb
@@ -0,0 +1,21 @@
+---
+title: List of Geode JMX MBeans
+---
+<a id="topic_4BCF867697C3456D96066BAD7F39FC8B"></a>
+
+
+This topic provides descriptions for the various management and monitoring 
MBeans that are available in Geode.
+
+The following diagram illustrates the relationship between the different JMX 
MBeans that have been developed to manage and monitor Apache Geode.
+
+<img src="../../images_svg/MBeans.svg" 
id="topic_4BCF867697C3456D96066BAD7F39FC8B__image_66525625D6804EDE9675D6CE509360A3"
 class="image" />
+
+-   **[JMX Manager MBeans](list_of_mbeans_full.html)**
+
+    This section describes the MBeans that are available on the JMX Manager 
node.
+
+-   **[Managed Node 
MBeans](list_of_mbeans_full.html#topic_48194A5BDF3F40F68E95A114DD702413)**
+
+    This section describes the MBeans that are available on all managed nodes.
+
+
