http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/multicast_communication_runtime_considerations.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/multicast_communication_runtime_considerations.html.md.erb
 
b/geode-docs/managing/monitor_tune/multicast_communication_runtime_considerations.html.md.erb
new file mode 100644
index 0000000..b8445f2
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/multicast_communication_runtime_considerations.html.md.erb
@@ -0,0 +1,30 @@
+---
+title:  Run-time Considerations for Multicast
+---
+
+When you use multicast for messaging and data distribution, you need to 
understand how the health monitoring setting works and how to control memory 
use.
+
+**Multicast Health Monitor**
+
+The Geode management and monitoring system is supplemented by a 
maxRetransmissionRatio health monitoring setting for distributed system 
members. This ratio is the number of retransmission requests received divided 
by the number of multicast datagrams written. If the ratio is at 1.0, the 
member is retransmitting as many packets as it originally sent. Retransmissions 
are point-to-point, and many processes may request retransmission, so this 
number can get quite high if problems occur. The default value for 
maxRetransmissionRatio is 0.2.
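+
+As a worked illustration with hypothetical numbers, a member that has received 30 retransmission requests after writing 150 multicast datagrams sits right at the default threshold:
+
+``` pre
+retransmission ratio = retransmission requests received / multicast datagrams written
+                     = 30 / 150
+                     = 0.2
+```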
+
+For example, consider a distributed system with one producer and two consumers of cache events that uses multicast to transmit cache updates. A new member is added that runs on a machine without multicast enabled. As a result, there is a retransmission request for every cache update, and the measured retransmission ratio rises to 1.0, well above the default maxRetransmissionRatio of 0.2.
+
+**Controlling Memory Use on Geode Hosts with Multicast**
+
+Running out of memory can impede a member’s performance and eventually lead 
to severe errors.
+
+When data is distributed over multicast, Geode incurs a fixed overhead of 
memory reserved for transmission buffers. A specified amount of memory is 
reserved for each distributed region. These producer-side buffers are used only 
when a receiver is not getting enough CPU to read from its own receiving buffer 
as quickly as the producer is sending. In this case, the receiver complains of 
lost data. The producer then retrieves the data, if it still exists in its buffer, and resends it to the receiver.
+
+Tuning the transmission buffers requires a careful balance. Larger buffers 
mean that more data remains available for retransmission, providing more 
protection in case of a problem. On the other hand, a larger amount of reserved 
memory means that less memory is available for caching.
+
+You can adjust the transmission buffer size by setting the `mcast-send-buffer-size` property in the `gemfire.properties` file:
+
+``` pre
+mcast-send-buffer-size=45000
+```
+
+**Note:**
The maximum buffer size is constrained only by the limits of your system. If you are not seeing problems that could be related to a lack of memory, do not change the default, since it provides greater protection in case of network problems.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/multicast_communication_testing_multicast_speed_limits.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/multicast_communication_testing_multicast_speed_limits.html.md.erb
 
b/geode-docs/managing/monitor_tune/multicast_communication_testing_multicast_speed_limits.html.md.erb
new file mode 100644
index 0000000..d339a55
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/multicast_communication_testing_multicast_speed_limits.html.md.erb
@@ -0,0 +1,128 @@
+---
+title:  Testing Multicast Speed Limits
+---
+
+TCP automatically adjusts its speed to the capability of the processes using 
it and enforces bandwidth sharing so that every process gets a turn. With 
multicast, you must determine and explicitly set those limits.
+
+<a id="multicast__section_AB06591284DB4E9785EE79FBE1C59554"></a>
+Without the proper configuration, multicast delivers its traffic as fast as 
possible, overrunning the ability of consumers to process the data and locking 
out other processes that are waiting for the bandwidth. You can tune your 
multicast and unicast behavior using `mcast-flow-control` in `gemfire.properties`.
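+
+As a sketch of what this property looks like, the value is a comma-separated triple (byteAllowance, rechargeThreshold, rechargeBlockMs); the numbers below are illustrative, not recommendations:
+
+``` pre
+mcast-flow-control=1048576,0.25,5000
+```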
+
+**Using Iperf**
+
+Iperf is an open-source TCP/UDP performance tool that you can use to find your 
site’s maximum rate for data distribution over multicast. Iperf can be 
downloaded from web sites such as the National Laboratory for Applied Network 
Research (NLANR).
+
+Iperf measures maximum bandwidth, allowing you to tune parameters and UDP 
characteristics. Iperf reports statistics on bandwidth, delay jitter, and 
datagram loss. On Linux, you can redirect this output to a file; on Windows, 
use the -o filename parameter.
+
+Run each test for ten minutes to make sure any potential problems have a 
chance to develop. Use the following command lines to start the sender and 
receivers.
+
+**Sender**:
+
+``` pre
+iperf -c 192.0.2.0 -u -T 1 -t 100 -i 1 -b 1000000000
+```
+
+where:
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<tbody>
+<tr class="odd">
+<td>-c address</td>
+<td><p>Run in client mode and connect to a multicast address</p></td>
+</tr>
+<tr class="even">
+<td>-u</td>
+<td><p>Use UDP</p></td>
+</tr>
+<tr class="odd">
+<td>-T #</td>
+<td><p>Multicast time-to-live: number of subnets across which a multicast 
packet can travel before the routers drop the packet</p></td>
+</tr>
+</tbody>
+</table>
+
+**Note:**
Do not set the -T parameter above 1 without consulting your network administrator. If this number is too high, the iperf traffic could interfere with production applications or continue out onto the internet.
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<tbody>
+<tr class="odd">
+<td>-t</td>
+<td><p>Length of time to transmit, in seconds</p></td>
+</tr>
+<tr class="even">
+<td>-i</td>
+<td><p>Time between periodic bandwidth reports, in seconds</p></td>
+</tr>
+<tr class="odd">
+<td>-b</td>
+<td>Sending bandwidth, in bits per second</td>
+</tr>
+</tbody>
+</table>
+
+**Receiver**:
+
+``` pre
+iperf -s -u -B 192.0.2.0 -i 1
+```
+
+where:
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<tbody>
+<tr class="odd">
+<td><p>-s</p></td>
+<td><p>Run in server mode</p></td>
+</tr>
+<tr class="even">
+<td><p>-u</p></td>
+<td><p>Use UDP</p></td>
+</tr>
+<tr class="odd">
+<td><p>-B address</p></td>
+<td><p>Bind to a multicast address</p></td>
+</tr>
+<tr class="even">
+<td>-i #</td>
+<td>Time between periodic bandwidth reports, in seconds</td>
+</tr>
+</tbody>
+</table>
+
+**Note:**
+If your Geode distributed system runs across several subnets, start a receiver 
on each subnet.
+
+In the receiver’s output, look at the Lost/Total Datagrams columns for the 
number and percentage of lost packets out of the total sent.
+
+**Output From Iperf Testing**:
+
+``` pre
+[    ID] Interval     Transfer    Bandwidth   Jitter  Lost/Total Datagrams
+[    3] 0.0- 1.0 sec     129 KBytes  1.0 Mbits/sec  0.778 ms     61/    151 (40%)
+[    3] 1.0- 2.0 sec     128 KBytes  1.0 Mbits/sec  0.236 ms     0/  89 (0%)
+[    3] 2.0- 3.0 sec     128 KBytes  1.0 Mbits/sec  0.264 ms     0/  89 (0%)
+[    3] 3.0- 4.0 sec     128 KBytes  1.0 Mbits/sec  0.248 ms     0/  89 (0%)
+[    3] 0.0- 4.3 sec     554 KBytes  1.0 Mbits/sec  0.298 ms     61/    447 (14%)
+
+Rerun the test at different bandwidths until you find the maximum useful 
multicast rate. Start high, then gradually decrease the send rate until the 
test runs consistently with no packet loss. For example, you might need to run 
five tests in a row, changing the -b (bits per second) parameter each time 
until there is no loss:
+
+1.  -b 1000000000 (loss)
+2.  -b 900000000 (no loss)
+3.  -b 950000000 (no loss)
+4.  -b 980000000 (a bit of loss)
+5.  -b 960000000 (no loss)
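+
+A sweep like the one above can be scripted; this sketch assumes iperf is on the path and reuses the sender options shown earlier (the rates and duration are placeholders to adjust for your site):
+
+``` pre
+for bw in 1000000000 900000000 950000000 980000000 960000000; do
+  iperf -c 192.0.2.0 -u -T 1 -t 600 -i 1 -b $bw
+done
+```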
+
+Enter `iperf -h` to see all of the command-line options. For more information, see the Iperf user manual.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/multicast_communication_troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/multicast_communication_troubleshooting.html.md.erb
 
b/geode-docs/managing/monitor_tune/multicast_communication_troubleshooting.html.md.erb
new file mode 100644
index 0000000..9e99981
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/multicast_communication_troubleshooting.html.md.erb
@@ -0,0 +1,21 @@
+---
+title:  Troubleshooting the Multicast Tuning Process
+---
+
+Several problems may arise during the initial testing and tuning process for 
multicasting.
+
+**Some or All Members Cannot Communicate**
+
+If your applications and cache servers cannot talk to each other, even though they are configured correctly, you may not have multicast connectivity on your network. It is common to have unicast connectivity but not multicast connectivity. Consult your network administrator.
+
+**Multicast Is Slower Than Expected**
+
+Look for an Ethernet flow control limit. If you have mixed-speed networks that 
result in a multicast flooding problem, the Ethernet hardware may be trying to 
slow down the fast traffic.
+
+Make sure your network hardware can deal with multicast traffic and route it 
efficiently. Some network hardware designed to handle multicast does not 
perform well enough to support a full-scale production system.
+
+**Multicast Fails Unexpectedly**
+
+If you find through testing that multicast fails above a round number (for example, it works up to 100 Mbps and fails at all rates over that), suspect that it is failing because it exceeds the network rate. This problem often arises at sites where one of the secondary LANs is slower than the main network.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/performance_controls.html.md.erb 
b/geode-docs/managing/monitor_tune/performance_controls.html.md.erb
new file mode 100644
index 0000000..79269a0
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/performance_controls.html.md.erb
@@ -0,0 +1,29 @@
+---
+title:  Performance Controls
+---
+
+This topic provides tuning suggestions of particular interest to developers, 
primarily programming techniques and cache configuration.
+
+Before you begin, you should understand Apache Geode [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
+
+-   **[Data 
Serialization](../../managing/monitor_tune/performance_controls_data_serialization.html)**
+
+    In addition to standard Java serialization, Geode offers serialization 
options that give you higher performance and greater flexibility for data 
storage, transfers, and language types.
+
+-   **[Setting Cache 
Timeouts](../../managing/monitor_tune/performance_controls_setting_cache_timeouts.html)**
+
+    Cache timeout properties can be modified through the gfsh `alter runtime` command (or declared in the `cache.xml` file) and can also be set through methods of the interface `org.apache.geode.cache.Cache`.
+
+-   **[Controlling Socket 
Use](../../managing/monitor_tune/performance_controls_controlling_socket_use.html)**
+
+    For peer-to-peer communication, you can manage socket use at the system 
member level and at the thread level.
+
+-   **[Management of Slow 
Receivers](../../managing/monitor_tune/performance_controls_managing_slow_receivers.html)**
+
+    You have several options for handling slow members that receive data distribution. The slow receiver options apply only to peer-to-peer communication between distributed regions using TCP/IP. This topic does not apply to client/server or multi-site communication, or to communication using the UDP unicast or IP multicast protocols.
+
+-   **[Increasing the Ratio of Cache 
Hits](../../managing/monitor_tune/performance_controls_increasing_cache_hits.html)**
+
+    The more frequently a get fails to find a valid value in the first cache 
and has to try a second cache, the more the overall performance is affected.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls_controlling_socket_use.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/performance_controls_controlling_socket_use.html.md.erb
 
b/geode-docs/managing/monitor_tune/performance_controls_controlling_socket_use.html.md.erb
new file mode 100644
index 0000000..4e04445
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/performance_controls_controlling_socket_use.html.md.erb
@@ -0,0 +1,34 @@
+---
+title:  Controlling Socket Use
+---
+
+For peer-to-peer communication, you can manage socket use at the system member 
level and at the thread level.
+
+The `conserve-sockets` setting indicates whether application threads share
sockets with other threads or use their own sockets for distributed system 
member communication. This setting has no effect on communication between a 
server and its clients, but it does control the server’s communication with 
its peers or a gateway sender's communication with a gateway receiver. In 
client/server settings in particular, where there can be a large number of 
clients for each server, controlling peer-to-peer socket use is an important 
part of tuning server performance.
+
+You configure `conserve-sockets` for the member as a whole in `gemfire.properties`. Additionally, you can change the socket conservation policy for individual threads through the API.
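+
+For example, to disable socket conservation for the member as a whole, in `gemfire.properties`:
+
+``` pre
+conserve-sockets=false
+```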
+
+When conserve-sockets is set to false, each application thread uses a dedicated socket to send to each of its peers and a dedicated socket to receive from each peer. Disabling socket conservation requires more system resources, but can potentially improve performance by removing socket contention between threads and optimizing distributed ACK operations. For distributed regions, the put operation, and destroy and invalidate for regions and entries, can all be optimized with conserve-sockets set to false. For partitioned regions, setting conserve-sockets to false can improve general throughput.
+
+**Note:**
+When you have transactions operating on EMPTY, NORMAL, or PARTITION regions, make sure that `conserve-sockets` is set to false to avoid distributed deadlocks.
+
+You can override the `conserve-sockets` setting for individual threads. These 
methods are in `org.apache.geode.distributed.DistributedSystem`:
+
+-   `setThreadsSocketPolicy`. Sets the calling thread’s individual socket 
policy, overriding the policy set for the application as a whole. If set to 
true, the calling thread shares socket connections with other threads. If 
false, the calling thread has its own sockets.
+-   `releaseThreadsSockets`. Frees any sockets held by the calling thread. 
Threads hold their own sockets only when conserve-sockets is false. Threads 
holding their own sockets can call this method to avoid holding the sockets 
until the socket-lease-time has expired.
+
+A typical implementation might set conserve-sockets to true at the application 
level and then override the setting for the specific application threads that 
perform the bulk of the distributed operations. The example below shows an 
implementation of the two API calls in a thread that performs benchmark tests. 
The example assumes the class implements Runnable. Note that the invocation, 
setThreadsSocketPolicy(false), is only meaningful if conserve-sockets is set to 
true at the application level.
+
+``` pre
+public void run() {
+    DistributedSystem.setThreadsSocketPolicy(false);
+    try {
+        // do your benchmark work
+    } finally {
+        DistributedSystem.releaseThreadsSockets();
+    }
+}
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls_data_serialization.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/performance_controls_data_serialization.html.md.erb
 
b/geode-docs/managing/monitor_tune/performance_controls_data_serialization.html.md.erb
new file mode 100644
index 0000000..d393eb3
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/performance_controls_data_serialization.html.md.erb
@@ -0,0 +1,9 @@
+---
+title:  Data Serialization
+---
+
+In addition to standard Java serialization, Geode offers serialization options 
that give you higher performance and greater flexibility for data storage, 
transfers, and language types.
+
+Under *Developing with Apache Geode*, see [Data 
Serialization](../../developing/data_serialization/chapter_overview.html#data_serialization).
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls_increasing_cache_hits.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/performance_controls_increasing_cache_hits.html.md.erb
 
b/geode-docs/managing/monitor_tune/performance_controls_increasing_cache_hits.html.md.erb
new file mode 100644
index 0000000..58fcb27
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/performance_controls_increasing_cache_hits.html.md.erb
@@ -0,0 +1,11 @@
+---
+title:  Increasing the Ratio of Cache Hits
+---
+
+The more frequently a get fails to find a valid value in the first cache and 
has to try a second cache, the more the overall performance is affected.
+
+A common cause of misses is expiration or eviction of the entry. If you have a 
region’s entry expiration or eviction enabled, monitor the region and entry 
statistics.
+
+If you see a high ratio of misses to hits on the entries, consider increasing 
the expiration times or the maximum values for eviction, if possible. See 
[Eviction](../../developing/eviction/chapter_overview.html) for more 
information.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls_managing_slow_receivers.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/performance_controls_managing_slow_receivers.html.md.erb
 
b/geode-docs/managing/monitor_tune/performance_controls_managing_slow_receivers.html.md.erb
new file mode 100644
index 0000000..fbbd329
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/performance_controls_managing_slow_receivers.html.md.erb
@@ -0,0 +1,56 @@
+---
+title:  Management of Slow Receivers
+---
+
+You have several options for handling slow members that receive data distribution. The slow receiver options apply only to peer-to-peer communication between distributed regions using TCP/IP. This topic does not apply to client/server or multi-site communication, or to communication using the UDP unicast or IP multicast protocols.
+
+Most of the options for handling slow members are related to on-site 
configuration during system integration and tuning. For this information, see 
[Slow Receivers with TCP/IP](slow_receivers.html).
+
+Slowing is more likely to occur when applications run many threads, send large 
messages (due to large entry values), or have a mix of region configurations.
+
+**Note:**
+If you are experiencing slow performance and are sending large objects (multiple megabytes), before implementing these slow receiver options make sure your socket buffer sizes are large enough for the objects you distribute. The socket buffer size is set using `socket-buffer-size` in `gemfire.properties`.
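+
+For example, to enlarge the buffers in `gemfire.properties` (the value is illustrative; size it to fit your largest distributed objects):
+
+``` pre
+socket-buffer-size=16000000
+```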
+
+By default, distribution between system members is performed synchronously. 
With synchronous communication, when one member is slow to receive, it can 
cause its producer members to slow down as well. This can lead to general 
performance problems in the distributed system.
+
+The specifications for handling slow receipt primarily affect how your members manage distribution for regions with distributed-no-ack scope, but they can affect other distributed scopes as well. If no regions have distributed-no-ack scope, this mechanism is unlikely to kick in at all. When slow receipt handling does kick in, however, it affects all distribution between the producer and consumer, regardless of scope. Partitioned regions ignore the scope attribute, but for the purposes of this discussion you should think of them as having an implicit distributed-ack scope.
+
+**Configuration Options**
+
+The slow receiver options are set in the producer member’s region attribute, 
enable-async-conflation, and in the consumer member’s async\* 
`gemfire.properties` settings.
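+
+A sketch of the consumer-side settings, assuming the async\* family of properties (async-distribution-timeout and async-queue-timeout in milliseconds, async-max-queue-size in megabytes; the values shown are illustrative):
+
+``` pre
+async-distribution-timeout=5
+async-queue-timeout=60000
+async-max-queue-size=8
+```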
+
+**Delivery Retries**
+
+If the receiver fails to receive a message, the sender continues to attempt to deliver the message as long as the receiving member is still in the distributed system. During the retry cycle, Geode throws warnings that include this string:
+
+``` pre
+will reattempt
+```
+
+The warnings are followed by an info message when the delivery finally 
succeeds.
+
+**Asynchronous Queueing For Slow Receivers**
+
+Your consumer members can be configured so that their producers switch to 
asynchronous messaging if the consumers are slow to respond to cache message 
distribution.
+
+When a producer switches, it creates a queue to hold and manage that consumer’s cache messages. When the queue empties, the producer switches back to synchronous messaging for the consumer. The settings that cause the producers to switch are specified on the consumer side in `gemfire.properties`.
+
+If you configure your consumers for slow receipt queuing, and your region 
scope is distributed-no-ack, you can also configure the producer to conflate 
entry update messages in its queues. This configuration option is set as the 
region attribute enable-async-conflation. By default distributed-no-ack entry 
update messages are not conflated.
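+
+In `cache.xml`, this attribute is set on the producer's region; a sketch (the region name is hypothetical):
+
+``` pre
+<region name="exampleRegion">
+  <region-attributes scope="distributed-no-ack" enable-async-conflation="true">
+  </region-attributes>
+</region>
+```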
+
+Depending on the application, conflation can greatly reduce the number of 
messages the producer needs to send to the consumer. With conflation, when an 
entry update is added to the queue, if the last operation queued for that key 
is also an update operation, the previously enqueued update is removed, leaving 
only the latest update to be sent to the consumer. Only entry update messages 
originating in a region with distributed-no-ack scope are conflated. Region 
operations and entry operations other than updates are not conflated.
+
+<img src="../../images_svg/async_system_queue_conflation.svg" 
id="perf__image_0FD90F27762F4440B9ECC40803988038" class="image" />
+
+Some conflation may not occur because entry updates are sent to the consumer 
before they can be conflated. For this example, assume no messages are sent 
while the update for Key A is added.
+
+**Note:**
+This method of conflation behaves the same as server-to-client conflation.
+
+You can enable queue conflation on a region-by-region basis. You should always 
enable it unless it is incompatible with your application needs. Conflation 
reduces the amount of data queued and distributed.
+
+These are reasons why conflation might not work for your application:
+
+-   With conflation, earlier entry updates are removed from the queue and 
replaced by updates sent later in the queue. This is problematic for 
applications that depend on a specific ordering of entry modifications. For 
example, if your receiver has a CacheListener that needs to know about every 
state change, you should disable conflation.
+-   If your queue remains in use for a significant period and you have entries 
that are updated frequently, you could have a series of update message 
replacements resulting in a notable delay in the arrival of any update for some 
entries. Imagine that update 1, before it is sent, is removed in favor of a 
later update 2. Then, before update 2 can be sent, it is removed in favor of 
update 3, and so on. This could result in unacceptably stale data on the 
receiver.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/performance_controls_setting_cache_timeouts.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/performance_controls_setting_cache_timeouts.html.md.erb
 
b/geode-docs/managing/monitor_tune/performance_controls_setting_cache_timeouts.html.md.erb
new file mode 100644
index 0000000..bb84a5f
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/performance_controls_setting_cache_timeouts.html.md.erb
@@ -0,0 +1,24 @@
+---
+title:  Setting Cache Timeouts
+---
+
+Cache timeout properties can be modified through the gfsh `alter runtime` command (or declared in the `cache.xml` file) and can also be set through methods of the interface `org.apache.geode.cache.Cache`.
+
+To modify cache timeout properties, you can issue the `gfsh alter runtime` command. For example:
+
+``` pre
+gfsh>alter runtime --search-timeout=150
+```
+
+The `--search-timeout` parameter specifies how long a netSearch operation can 
wait for data before timing out. The default is 5 minutes. You may want to 
change this based on your knowledge of the network load or other factors.
+
+The next two configurations describe timeout settings for locking in regions with global scope. Locking operations can time out in two places: when waiting to obtain a lock (lock timeout), and when holding a lock (lock lease time). Operations that modify objects in a global region use automatic locking. In addition, you can manually lock a global region and its entries through `org.apache.geode.cache.Region`. The explicit lock methods provided by the APIs allow you to specify a lock timeout parameter. The lock timeout for implicit operations and the lock lease time for implicit and explicit operations are governed by these cache-wide settings:
+
+``` pre
+gfsh>alter runtime --lock-timeout=30 --lock-lease=60
+```
+
+-   `--lock-timeout`. Timeout for object lock requests, specified in seconds. 
The setting affects automatic locking only, and does not apply to manual 
locking. The default is 1 minute. If a lock request does not return before the 
specified timeout period, it is cancelled and returns with a failure.
+-   `--lock-lease`. Timeout for object lock leases, specified in seconds. The 
setting affects both automatic locking and manual locking. The default is 2 
minutes. Once a lock is obtained, it may remain in force for the lock lease 
time period before being automatically cleared by the system.
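+
+The same timeouts can also be set programmatically through the `org.apache.geode.cache.Cache` interface mentioned above; a sketch, assuming an already-created cache (all three methods take seconds):
+
+``` pre
+Cache cache = ...; // an existing cache instance
+cache.setSearchTimeout(150);
+cache.setLockTimeout(30);
+cache.setLockLease(60);
+```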
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/slow_messages.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/slow_messages.html.md.erb 
b/geode-docs/managing/monitor_tune/slow_messages.html.md.erb
new file mode 100644
index 0000000..481c396
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/slow_messages.html.md.erb
@@ -0,0 +1,21 @@
+---
+title:  Slow distributed-ack Messages
+---
+
+In systems with distributed-ack regions, a sudden large number of 
distributed-no-ack operations can cause distributed-ack operations to take a 
long time to complete.
+
+The `distributed-no-ack` operations can come from anywhere. They may be 
updates to `distributed-no-ack` regions or they may be other 
`distributed-no-ack` operations, like destroys, performed on any region in the 
cache, including the `distributed-ack` regions.
+
+The main reasons why a large number of `distributed-no-ack` messages may delay 
`distributed-ack` operations are:
+
+-   For any single socket connection, all operations are executed serially. If 
there are any other operations buffered for transmission when a 
`distributed-ack` is sent, the `distributed-ack` operation must wait to get to 
the front of the line before being transmitted. Of course, the operation’s 
calling process is also left waiting.
+-   The `distributed-no-ack` messages are buffered by their threads before 
transmission. If many messages are buffered and then sent to the socket at 
once, the line for transmission might be very long.
+
+You can take these steps to reduce the impact of this problem:
+
+1.  If you’re using TCP, check whether you have socket conservation enabled for your members. It is configured by setting the Geode property `conserve-sockets` to true. If enabled, each application’s threads will share sockets unless you override the setting at the thread level. Work with your application programmers to see whether you might disable sharing entirely or at least for the threads that perform `distributed-ack` operations. These include operations on `distributed-ack` regions and also `netSearches` performed on regions of any distributed scope. (Note: `netSearch` is only performed on regions with a data-policy of empty, normal, or preloaded.) If you give each thread that performs `distributed-ack` operations its own socket, you effectively let it scoot to the front of the line ahead of the `distributed-no-ack` operations that are being performed by other threads. The thread-level override is done by calling the `DistributedSystem.setThreadsSocketPolicy(false)` method.
+2.  Reduce your buffer sizes to slow down the distributed-no-ack operations. These changes slow down the threads performing distributed-no-ack operations and allow the distributed-ack operations to be sent in a more timely manner.
+    -   If you're using UDP (you either have multicast-enabled regions or have set `disable-tcp` to true in `gemfire.properties`), consider reducing the byteAllowance field of `mcast-flow-control` to something smaller than the default of 3.5 megabytes.
+    -   If you're using TCP/IP, reduce the `socket-buffer-size` in `gemfire.properties`.
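+
+For example, illustrative reductions in `gemfire.properties` (the values are placeholders, not recommendations; defaults differ by release, so tune against your own measurements):
+
+``` pre
+socket-buffer-size=16384
+mcast-flow-control=1000000,0.25,5000
+```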
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/slow_receivers.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/slow_receivers.html.md.erb 
b/geode-docs/managing/monitor_tune/slow_receivers.html.md.erb
new file mode 100644
index 0000000..8960d913
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/slow_receivers.html.md.erb
@@ -0,0 +1,17 @@
+---
+title:  Slow Receivers with TCP/IP
+---
+
+You have several options for preventing situations that can cause slow 
receivers of data distributions. The slow receiver options control only 
peer-to-peer communication using TCP/IP. This discussion does not apply to 
client/server or multi-site communication, or to communication using the UDP 
unicast or multicast protocols.
+
+Before you begin, you should understand Geode [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
+
+-   **[Preventing Slow 
Receivers](../../managing/monitor_tune/slow_receivers_preventing_problems.html)**
+
+    During system integration, you can identify and eliminate potential causes 
of slow receivers in peer-to-peer communication.
+
+-   **[Managing Slow 
Receivers](../../managing/monitor_tune/slow_receivers_managing.html)**
+
+    If the receiver fails to receive a message, the sender continues to 
attempt to deliver the message as long as the receiving member is still in the 
distributed system.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/slow_receivers_managing.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/slow_receivers_managing.html.md.erb 
b/geode-docs/managing/monitor_tune/slow_receivers_managing.html.md.erb
new file mode 100644
index 0000000..9bc4302
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/slow_receivers_managing.html.md.erb
@@ -0,0 +1,99 @@
+---
+title:  Managing Slow Receivers
+---
+
+If the receiver fails to receive a message, the sender continues to attempt to 
deliver the message as long as the receiving member is still in the distributed 
system.
+
+During the retry cycle, Geode throws warnings that include this string:
+
+``` pre
+will reattempt
+```
+
+The warnings are followed by an informational message when the delivery 
finally succeeds.
+
+For distributed regions, the scope of a region determines whether distribution 
acknowledgments and distributed synchronization are required. Partitioned 
regions ignore the scope attribute, but for the purposes of this discussion you 
should think of them as having an implicit distributed-ack scope.
+
+By default, distribution between system members is performed synchronously. 
With synchronous communication, when one member is slow to receive, it can 
cause its producers to slow down as well. This, of course, can lead to general 
performance problems in the distributed system.
+
+If you are experiencing slow performance and are sending large objects 
(multiple megabytes), before implementing these slow receiver options make sure 
your socket buffer sizes are appropriate for the size of the objects you 
distribute. The socket buffer size is set using socket-buffer-size in the 
`gemfire.properties` file.
+
+**Managing Slow distributed-no-ack Receivers**
+
+You can configure your consumer members so their messages are queued 
separately when they are slow to respond. The queueing happens in the producer 
members when the producers detect slow receipt and allows the producers to keep 
sending to other consumers at a normal rate. Any member that receives data 
distribution can be configured as described in this section.
+
+The specifications for handling slow receipt primarily affect how your members 
manage distribution for regions with distributed-no-ack scope, where 
distribution is asynchronous, but the specifications can affect other 
distributed scopes as well. If no regions have distributed-no-ack scope, the 
mechanism is unlikely to kick in at all. When slow receipt handling does kick 
in, however, it affects all distribution between the producer and that 
consumer, regardless of scope.
+
+**Note:**
+These slow receiver options are disabled in systems using SSL. See 
[SSL](../security/ssl_overview.html).
+
+Each consumer member determines how its own slow behavior is to be handled by 
its producers. The settings are specified as distributed system connection 
properties. This section describes the settings and lists the associated 
properties.
+
+-   async-distribution-timeout—The distribution timeout specifies how long 
producers are to wait for the consumer to respond to synchronous messaging 
before switching to asynchronous messaging with that consumer. When a producer 
switches to asynchronous messaging, it creates a queue for that consumer’s 
messages and a separate thread to handle the communication. When the queue 
empties, the producer automatically switches back to synchronous communication 
with the consumer. These settings affect how long your producer’s cache 
operations might block. The sum of the timeouts for all consumers is the 
longest time your producer might block on a cache operation.
+-   async-queue-timeout—The queue timeout sets a limit on the length of time 
the asynchronous messaging queue can exist without a successful distribution to 
the slow receiver. When the timeout is reached, the producer asks the consumer 
to leave the distributed system.
+-   async-max-queue-size—The maximum queue size limits the amount of memory 
the asynchronous messaging queue can consume. When the maximum is reached, the 
producer asks the consumer to leave the distributed system.
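+
+As a sketch, the three settings above might be combined in `gemfire.properties` like this (the values are illustrative; tune them for your own workload):
+
+``` pre
+# Switch to asynchronous messaging after blocking 5 ms on a slow consumer
+async-distribution-timeout=5
+# Ask the consumer to leave after 60 seconds (60000 ms) without a successful flush
+async-queue-timeout=60000
+# Ask the consumer to leave if its queue exceeds 8 megabytes
+async-max-queue-size=8
+```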
+
+**Configuring Async Queue Conflation**
+
+When the region scope is distributed-no-ack, you can configure the producer to conflate entry update messages in its queues, which may further speed communication. By default, distributed-no-ack entry update messages are not conflated. The configuration is set in the producer at the region level.
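+
+As a sketch, the producer-side region configuration might look like this in `cache.xml` (the region name is hypothetical):
+
+``` pre
+<region name="exampleRegion">
+  <region-attributes scope="distributed-no-ack" enable-async-conflation="true"/>
+</region>
+```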
+
+**Forcing the Slow Receiver to Disconnect**
+
+If either of the queue timeout or maximum queue size limits is reached, the 
producer sends the consumer a high-priority message (on a different TCP 
connection than the connection used for cache messaging) telling it to 
disconnect from the distributed system. This prevents growing memory 
consumption by the other processes that are queuing changes for the slow 
receiver while they wait for that receiver to catch up. It also allows the slow 
member to start fresh, possibly clearing up the issues that were causing it to 
run slowly.
+
+When a producer gives up on a slow receiver, it logs one of these types of 
warnings:
+
+-   Blocked for time ms which is longer than the max of asyncQueueTimeout ms 
so asking slow receiver slow\_receiver\_ID to disconnect.
+-   Queued bytes exceed max of asyncMaxQueueSize so asking slow receiver 
slow\_receiver\_ID to disconnect.
+
+When a process disconnects after receiving a request to do so by a producer, 
it logs a warning message of this type:
+
+-   Disconnect forced by producer because we were too slow.
+
+These messages only appear in your logs if logging is enabled and the log 
level is set to a level that includes warning (which it does by default). See 
[Logging](../logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
+
+If your consumer is unable to receive even high priority messages, only the 
producer’s warnings will appear in the logs. If you see only producer 
warnings, you can restart the consumer process. Otherwise, the Geode failure 
detection code will eventually cause the member to leave the distributed system 
on its own.
+
+**Use Cases**
+
+These are the main use cases for the slow receiver specifications:
+
+-   Message bursts—With message bursts, the socket buffer can overflow and 
cause the producer to block. To keep from blocking, first make sure your socket 
buffer is large enough to handle a normal number of messages (using the 
socket-buffer-size property), then set the async distribution timeout to 1. 
With this very low distribution timeout, when your socket buffer does fill up, 
the producer quickly switches to async queueing. Use the distribution 
statistics, asyncQueueTimeoutExceeded and asyncQueueSizeExceeded, to make sure 
your queue settings are high enough to avoid forcing unwanted disconnects 
during message bursts.
+-   Unhealthy or dead members—When members are dead or very unhealthy, they 
may not be able to communicate with other distributed system members. The slow 
receiver specifications allow you to force crippled members to disconnect, 
freeing up resources and possibly allowing the members to restart fresh. To 
configure for this, set the distribution timeout high (one minute), and set the 
queue timeout low. This is the best way to avoid queueing for momentary 
slowness, while still quickly telling very unhealthy members to leave the 
distributed system.
+-   Combination message bursts and unhealthy members—To configure for both 
of the above situations, set the distribution timeout low and the queue timeout 
high, as for the message bursts scenario.
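+
+As a sketch, the combined configuration (low distribution timeout, high queue timeout) might look like this in `gemfire.properties`; the values are illustrative:
+
+``` pre
+# Queue quickly when the socket buffer fills during a burst
+async-distribution-timeout=1
+# Tolerate long bursts before asking the consumer to disconnect
+async-queue-timeout=120000
+```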
+
+**Managing Slow distributed-ack Receivers**
+
+When using a distribution scope other than distributed-no-ack, alerts are 
issued for slow receivers. A member that isn’t responding to messages may be 
sick, slow, or missing. Sick or slow members are detected in message 
transmission and reply-wait processing code, triggering a warning alert first. 
If a member still isn’t responding, a severe warning alert is issued, 
indicating that the member may be disconnected from the distributed system. 
This alert sequence is enabled by setting the ack-wait-threshold and the 
ack-severe-alert-threshold to some number of seconds.
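+
+For example, the alert sequence might be enabled in `gemfire.properties` as follows (both values are in seconds and are illustrative):
+
+``` pre
+# Log a warning alert after 15 seconds without an acknowledgment
+ack-wait-threshold=15
+# Begin suspect processing 10 seconds after the warning threshold is reached
+ack-severe-alert-threshold=10
+```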
+
+When ack-severe-alert-threshold is set, for regions configured with either distributed-ack or global scope, or with the partition data policy, Geode waits a total of ack-wait-threshold seconds for a response to a cache 
operation, then it logs a warning alert ("Membership: requesting removal of 
entry(\#). Disconnected as a slow-receiver"). After waiting an additional 
ack-severe-alert-threshold seconds after the first threshold is reached, the 
system also informs the failure detection mechanism that the receiver is 
suspect and may be disconnected, as shown in the following figure.
+
+<img src="../../images_svg/member_severe_alert.svg" 
id="slow_recv__image_BA474143B16744F28DE0AB1CAD00FB48" class="image" />
+The events occur in this order:
+
+1.  CACHE\_OPERATION - transmission of cache operation is initiated.
+2.  SUSPECT - identified as a suspect by ack-wait-threshold, which is the maximum time to wait for an acknowledgment before initiating failure detection.
+3.  I AM ALIVE - notification to the system in response to failure detection 
queries, if the process is still alive. A new membership view is sent to all 
members if the suspect process fails to answer with I AM ALIVE.
+4.  SEVERE ALERT - the result of ack-severe-alert-threshold elapsing without receiving a reply.
+
+When a member fails suspect processing, its cache is closed and its 
CacheListeners are notified with the afterRegionDestroyed notification. The 
RegionEvent passed with this notification has a CACHE\_CLOSED operation and a 
FORCED\_DISCONNECT operation, as shown in the FORCED\_DISCONNECT example.
+
+``` pre
+public static final Operation FORCED_DISCONNECT
+    = new Operation("FORCED_DISCONNECT",
+        true, // isLocal
+        true, // isRegion
+        OP_TYPE_DESTROY,
+        OP_DETAILS_NONE
+        );
+```
+
+A cache closes due to being expelled from the distributed system by other 
members. Typically, this happens when a member becomes unresponsive and does 
not respond to heartbeat requests within the member-timeout period, or when 
ack-severe-alert-threshold has expired without a response from the member.
+
+**Note:**
+This is marked as a region operation.
+
+Other members see the normal membership notifications for the departing 
member. For instance, RegionMembershipListeners receive the 
afterRemoteRegionCrashed notification, and SystemMembershipListeners receive 
the memberCrashed notification.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/slow_receivers_preventing_problems.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/slow_receivers_preventing_problems.html.md.erb
 
b/geode-docs/managing/monitor_tune/slow_receivers_preventing_problems.html.md.erb
new file mode 100644
index 0000000..1a6d256
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/slow_receivers_preventing_problems.html.md.erb
@@ -0,0 +1,28 @@
+---
+title:  Preventing Slow Receivers
+---
+
+During system integration, you can identify and eliminate potential causes of 
slow receivers in peer-to-peer communication.
+
+Work with your network administrator to eliminate any problems you identify.
+
+Slowing is more likely to occur when applications run many threads, send large 
messages (due to large entry values), or have a mix of region configurations. 
The problem can also arise from message delivery retries caused by intermittent 
connection problems.
+
+**Host Resources**
+
+Make sure that the machines that run Geode members have enough CPU available 
to them. Do not run any other heavyweight processes on the same machine.
+
+The machines that host Geode application and cache server processes should 
have comparable computing power and memory capacity. Otherwise, members on the 
less powerful machines tend to have trouble keeping up with the rest of the 
group.
+
+**Network Capacity**
+
+Eliminate congested areas on the network by rebalancing the traffic load. Work 
with your network administrator to identify and eliminate traffic bottlenecks, 
whether caused by the architecture of the distributed Geode system or by 
contention between the Geode traffic and other traffic on your network. 
Consider whether more subnets are needed to separate the Geode administrative 
traffic from Geode data transport and to separate all the Geode traffic from 
the rest of your network load.
+
+The network connections between hosts need to have equal bandwidth. If not, 
you can end up with a configuration like the multicast example in the following 
figure, which creates conflicts among the members. For example, if app1 sends 
out data at 7Mbps, app3 and app4 would be fine, but app2 would miss some data. 
In that case, app2 contacts app1 on the TCP channel and sends a log message 
that it’s dropping data.
+<img src="../../images_svg/unbalanced_network_capacity_probs.svg" 
id="slow_recv__image_F8C424AB97C444298993294000676150" class="image" />
+
+**Plan for Growth**
+
+Upgrade the infrastructure to the level required for acceptable performance. 
Analyze the expected Geode traffic in comparison to the network’s capacity. 
Build in extra capacity for growth and high-traffic spikes. Similarly, evaluate 
whether the machines that host Geode application and cache server processes can 
handle the expected load.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_communication.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/socket_communication.html.md.erb 
b/geode-docs/managing/monitor_tune/socket_communication.html.md.erb
new file mode 100644
index 0000000..99d2117
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/socket_communication.html.md.erb
@@ -0,0 +1,31 @@
+---
+title:  Socket Communication
+---
+
+Geode processes communicate using TCP/IP and UDP unicast and multicast 
protocols. In all cases, communication uses sockets that you can tune to 
optimize performance.
+
+The adjustments you make to tune your Geode communication may run up against 
operating system limits. If this happens, check with your system administrator 
about adjusting the operating system settings.
+
+All of the settings discussed here are listed as `gemfire.properties` and 
`cache.xml` settings. They can also be configured through the API and some can 
be configured at the command line. Before you begin, you should understand 
Geode [Basic Configuration and Programming](../../basic_config/book_intro.html).
+
+-   **[Setting Socket Buffer 
Sizes](../../managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html)**
+
+    When you determine buffer size settings, you try to strike a balance 
between communication needs and other processing.
+
+-   **[Ephemeral TCP Port 
Limits](../../managing/monitor_tune/socket_communication_ephemeral_tcp_port_limits.html)**
+
By default, Windows’ ephemeral ports are within the range 1024-4999, inclusive. You can increase the range.
+
+-   **[Making Sure You Have Enough 
Sockets](../../managing/monitor_tune/socket_communication_have_enough_sockets.html)**
+
+    The number of sockets available to your applications is governed by 
operating system limits.
+
+-   **[TCP/IP KeepAlive 
Configuration](../../managing/monitor_tune/socket_tcp_keepalive.html)**
+
+    Geode supports TCP KeepAlive to prevent socket connections from being 
timed out.
+
+-   **[TCP/IP Peer-to-Peer Handshake 
Timeouts](../../managing/monitor_tune/socket_communication_tcpip_p2p_handshake_timeouts.html)**
+
+    You can alleviate connection handshake timeouts for TCP/IP connections by 
increasing the connection handshake timeout interval with the system property 
p2p.handshakeTimeoutMs.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_communication_ephemeral_tcp_port_limits.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/socket_communication_ephemeral_tcp_port_limits.html.md.erb
 
b/geode-docs/managing/monitor_tune/socket_communication_ephemeral_tcp_port_limits.html.md.erb
new file mode 100644
index 0000000..e0dc158
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/socket_communication_ephemeral_tcp_port_limits.html.md.erb
@@ -0,0 +1,41 @@
+---
+title:  Ephemeral TCP Port Limits
+---
+
+By default, Windows’ ephemeral ports are within the range 1024-4999, inclusive. You can increase the range.
+
+<a id="socket_comm__section_F535D5D99206498DBBD5A6CC3230F25B"></a>
+If you are repeatedly receiving the following exception:
+
+``` pre
+java.net.BindException: Address already in use: connect
+```
+
+and if your system is experiencing a high degree of network activity, such as 
numerous short-lived client connections, this could be related to a limit on 
the number of ephemeral TCP ports. While this issue could occur with other 
operating systems, typically, it is only seen with Windows due to a low default 
limit.
+
+Perform this procedure to increase the limit:
+
+1.  Open the Windows Registry Editor.
+2.  Navigate to the following key:
+
+    ``` pre
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
+    ```
+
+3.  From the Edit menu, click New, and then add the following registry entry:
+
+    ``` pre
+    Value Name: MaxUserPort 
+    Value Type: DWORD  
+    Value data: 36863
+    ```
+
+4.  Exit the Registry Editor, and then restart the computer.
+
+This affects all versions of the Windows operating system.
+
+**Note for UDP on Unix Systems**
+
+Unix systems have a default maximum socket buffer size for receiving UDP 
multicast and unicast transmissions that is lower than the default settings for 
mcast-recv-buffer-size and udp-recv-buffer-size. To achieve high-volume 
multicast messaging, you should increase the maximum Unix buffer size to at 
least one megabyte.
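+
+On Linux, for example, the kernel maximum might be raised with sysctl, as in this sketch (the exact keys and required values vary by operating system; check with your system administrator):
+
+``` pre
+# Raise the maximum socket receive buffer to 1 MB (run as root)
+sysctl -w net.core.rmem_max=1048576
+```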
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_communication_have_enough_sockets.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/socket_communication_have_enough_sockets.html.md.erb
 
b/geode-docs/managing/monitor_tune/socket_communication_have_enough_sockets.html.md.erb
new file mode 100644
index 0000000..fcc2416
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/socket_communication_have_enough_sockets.html.md.erb
@@ -0,0 +1,168 @@
+---
+title:  Making Sure You Have Enough Sockets
+---
+
+The number of sockets available to your applications is governed by operating 
system limits.
+
+Sockets use file descriptors and the operating system’s view of your 
application’s socket use is expressed in terms of file descriptors. There are 
two limits, one on the maximum descriptors available to a single application 
and the other on the total number of descriptors available in the system. If 
you get error messages telling you that you have too many files open, you might 
be hitting the operating system limits with your use of sockets. Your system 
administrator might be able to increase the system limits so that you have more 
available. You can also tune your members to use fewer sockets for their 
outgoing connections. This section discusses socket use in Geode and ways to 
limit socket consumption in your Geode members.
+
+## <a id="socket_comm__section_31B4EFAD6F384AB1BEBCF148D3DEA514" 
class="no-quick-link"></a>Socket Sharing
+
+You can configure socket sharing for peer-to-peer and client-to-server 
connections:
+
+-   **Peer-to-peer**. You can configure whether your members share sockets 
both at the application level and at the thread level. To enable sharing at the 
application level, set `conserve-sockets` to true in `gemfire.properties`. To 
achieve maximum throughput, however, we recommend that you set 
`conserve-sockets` to `false`.
+
+    At the thread level, developers can override this setting by using the 
DistributedSystem API method `setThreadsSocketPolicy`. You might want to enable 
socket sharing at the application level and then have threads that do a lot of 
cache work take sole ownership of their sockets. Make sure to program these 
threads to release their sockets as soon as possible using the 
`releaseThreadsSockets` method, rather than waiting for a timeout or thread 
death.
+
+-   **Client**. You can configure whether your clients share their socket 
connections to servers with the pool setting `thread-local-connections`. There 
is no thread override for this setting. All threads either have their own 
socket or they all share.
+
+## <a id="socket_comm__section_6189D4E5E14F47E7882354603FBCE471" 
class="no-quick-link"></a>Socket Lease Time
+
+You can force the release of an idle socket connection for peer-to-peer and 
client-to-server connections:
+
+-   **Peer-to-peer**. For peer-to-peer threads that do not share sockets, you 
can use the `socket-lease-time` to make sure that no socket sits idle for too 
long. When a socket that belongs to an individual thread remains unused for 
this time period, the system automatically returns it to the pool. The next 
time the thread needs a socket, it creates a new socket.
+-   **Client**. For client connections, you can affect the same lease-time 
behavior by setting the pool `idle-timeout`.
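+
+As a sketch, the two settings might be configured as follows (values are in milliseconds and are illustrative; the pool name is hypothetical):
+
+``` pre
+Peer-to-peer idle socket lease in gemfire.properties:
+socket-lease-time=60000
+
+Client pool idle timeout in cache.xml:
+<pool name="examplePool" idle-timeout="60000"> ... </pool>
+```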
+
+## <a id="socket_comm__section_936C6562C0034A2EAC9A63FFE9FDAC36" 
class="no-quick-link"></a>Calculating Connection Requirements
+
+Each type of member has its own connection requirements. Clients need 
connections to their servers, peers need connections to peers, and so on. Many 
members have compound roles. Use these guidelines to figure each member’s 
socket needs and to calculate the combined needs of members that run on a 
single host system.
+
+A member’s socket use is governed by a number of factors, including:
+
+-   How many peer members it connects to
+-   How many threads it has that update the cache and whether the threads 
share sockets
+-   Whether it is a server or a client
+-   How many connections come in from other processes
+
+The socket requirements described here are worst-case. Generally, it is not practical to calculate exact socket use for your applications. Socket use varies depending on a number of factors, including how many members are running, what their threads are doing, and whether threads share sockets.
+
+To calculate any member’s socket requirements, add up the requirements for 
every category that applies to the member. For example, a cache server running 
in a distributed system with clients connected to it has both peer-to-peer and 
server socket requirements.
+
+## <a id="socket_comm__section_DF64BDE7B6AA47A9B08E0540CAD6DA3A" 
class="no-quick-link"></a>Peer-to-Peer Socket Requirements Per Member
+
+Every member of a distributed system maintains two outgoing and two incoming 
connections to every peer. If threads share sockets, these fixed sockets are 
the sockets they share.
+
+For every thread that does not share sockets, additional sockets, one in and 
one out, are added for each peer. This affects not only the member’s socket 
count, but the socket count for every member the member thread connects to.
+
+In this table:
+
+-   M is the total number of members in the distributed system.
+-   T is the number of threads in a member that own their own sockets and do 
not share.
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Peer Member Socket Description</th>
+<th>Number Used</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Membership failure detection</p></td>
+<td>2</td>
+</tr>
+<tr class="even">
+<td><p>Listener for incoming peer connections (server P2P)</p></td>
+<td><p>1</p></td>
+</tr>
+<tr class="odd">
+<td><p>Shared sockets (2 in and 2 out)</p>
+<p>Threads that share sockets use these.</p></td>
+<td><p>4 * (M-1)</p></td>
+</tr>
+<tr class="even">
+<td>This member’s thread-owned sockets (1 in and 1 out for each thread, for 
each peer member).</td>
+<td><p>(T * 2) * (M-1)</p></td>
+</tr>
+<tr class="odd">
+<td><p>Other member’s thread-owned sockets that connect to this member (1 in 
and 1 out for each). Note that this might include server threads if any of the 
other members are servers (see Server).</p></td>
+<td><p>Summation over (M-1) other members of (T*2)</p></td>
+</tr>
+</tbody>
+</table>
+
+**Note:**
+The threads servicing client requests add to the total count of thread-owned 
sockets both for this member connecting to its peers and for peers that connect 
to this member.
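+
+As a worked example with hypothetical numbers, consider a distributed system of M=4 members, where this member runs T=2 non-sharing threads and each of the three other members also runs 2 non-sharing threads that connect to it:
+
+``` pre
+Membership failure detection:                      2
+Listener for incoming peer connections:            1
+Shared sockets:                  4 * (4-1)      = 12
+This member's thread-owned:      (2*2) * (4-1)  = 12
+Other members' thread-owned:     3 * (2*2)      = 12
+Worst-case total:                                 39
+```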
+
+## <a id="socket_comm__section_0497E07414CC4E0B968B4F3A7AFD3690" 
class="no-quick-link"></a>Server Socket Requirements Per Server
+
+Servers use one connection for each incoming client connection. By default, 
each connection is serviced by a server thread. These threads that service 
client requests communicate with the rest of the server distributed system to 
satisfy the requests and distributed update operations. Each of these threads 
uses its own thread-owned sockets for peer-to-peer communication. So this adds 
to the server’s group of thread-owned sockets.
+
+The thread and connection count in the server may be limited by server 
configuration settings. These are max-connections and max-threads settings in 
the &lt;cache-server&gt; element of the `cache.xml`. These settings limit the 
number of connections the server accepts and the maximum number of threads that 
can service client requests. Both of these limit the server's overall 
connection requirements:
+
+-   When the connection limit is reached, the server refuses additional 
connections. This limits the number of connections the server uses for clients.
+-   When the thread limit is reached, threads start servicing multiple 
connections. This does not limit the number of client connections, but does 
limit the number of peer connections required to service client requests. Each 
server thread used for clients uses its own sockets, so it requires 2 
connections to each of the server’s peers. The max-threads setting puts a cap 
on the number of this type of peer connection that your server needs.
+
+The server uses one socket for each incoming client pool connection. If client 
subscriptions are used, the server creates an additional connection to each 
client that enables subscriptions.
+
+In this table, M is the total number of members in the distributed system.
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Server Socket Description</th>
+<th>Number Used</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Listener for incoming client connections</td>
+<td><p>1</p></td>
+</tr>
+<tr class="even">
+<td>Client pool connections to server</td>
+<td>Number of pool connections to this server</td>
+</tr>
+<tr class="odd">
+<td><p>Threads servicing client requests (the lesser of the client pool 
connection count and the server’s max-threads setting). These connections are 
to the server’s peers.</p></td>
+<td><p>(2 * number of threads in a server that service client pool 
connections)</p>
+<p>* (M-1)</p>
+<p>These threads do not share sockets.</p></td>
+</tr>
+<tr class="even">
+<td>Subscription connections</td>
+<td><p>2 * number of client subscription connections to this server</p></td>
+</tr>
+</tbody>
+</table>
+
+With client/server installations, the number of client connections to any 
single server is undetermined, but Geode’s server load balancing and 
conditioning keeps the connections fairly evenly distributed among servers.
+
+Servers are peers in their own distributed system and have the additional 
socket requirements as noted in the Peer-to-Peer section above.
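+
+As a worked example with hypothetical numbers, consider one server in a distributed system of M=3 members, with 10 client pool connections, enough max-threads for one thread per connection, and 4 client subscription connections:
+
+``` pre
+Listener for incoming client connections:             1
+Client pool connections:                             10
+Threads servicing client requests: (2*10) * (3-1)  = 40
+Subscription connections:          2 * 4           =  8
+Worst-case total (before adding peer sockets):       59
+```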
+
+## <a id="socket_comm__section_0D46E55422D24BA1B0CD888E14FD5182" 
class="no-quick-link"></a>Client Socket Requirements per Client
+
+Client connection requirements are compounded by how many pools they use. The 
use varies according to runtime client connection needs, but will usually have 
maximum and minimum settings. Look for the &lt;pool&gt; element in the 
`cache.xml` for the configuration properties.
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Client Socket Description</th>
+<th>Number Used</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Pool connection</p></td>
+<td><p>summation over the client pools of max-connections</p></td>
+</tr>
+<tr class="even">
+<td><p>Subscription connections</p></td>
+<td><p>2 * summation over the client pools of subscription-enabled</p></td>
+</tr>
+</tbody>
+</table>
+
+If your client acts as a peer in its own distributed system, it has the 
additional socket requirements as noted in the Peer-to-Peer section of this 
topic.
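+
+As a worked example with hypothetical numbers, a client with two pools, each with max-connections set to 5 and one of them subscription-enabled, would need at worst:
+
+``` pre
+Pool connections:          5 + 5  = 10
+Subscription connections:  2 * 1  =  2
+Worst-case total:                   12
+```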

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html.md.erb
 
b/geode-docs/managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html.md.erb
new file mode 100644
index 0000000..8af18c8
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/socket_communication_setting_socket_buffer_sizes.html.md.erb
@@ -0,0 +1,127 @@
+---
+title:  Setting Socket Buffer Sizes
+---
+
+When you determine buffer size settings, you try to strike a balance between 
communication needs and other processing.
+
+Larger socket buffers allow your members to distribute data and events more 
quickly, but they also take memory away from other things. If you store very 
large data objects in your cache, finding the right sizing for your buffers 
while leaving enough memory for the cached data can become critical to system 
performance.
+
+Ideally, you should have buffers large enough for the distribution of any 
single data object so you don’t get message fragmentation, which lowers 
performance. Your buffers should be at least as large as your largest stored 
objects and their keys plus some overhead for message headers. The overhead 
varies depending on who is sending and receiving, but 100 bytes should be 
sufficient. You can also look at the statistics for the communication between 
your processes to see how many bytes are being sent and received.
+
+If you see performance problems and logging messages indicating blocked 
writers, increasing your buffer sizes may help.
+
+This table lists the settings for the various member relationships and 
protocols, and tells where to set them.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="34%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Protocol / Area Affected</th>
+<th>Configuration Location</th>
+<th>Property Name</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><strong>TCP / IP</strong></td>
+<td>---</td>
+<td>---</td>
+</tr>
+<tr class="even">
+<td>Peer-to-peer send/receive</td>
+<td><p>gemfire.properties</p></td>
+<td>socket-buffer-size</td>
+</tr>
+<tr class="odd">
+<td>Client send/receive</td>
+<td><p>cache.xml &lt;pool&gt;</p></td>
+<td>socket-buffer-size</td>
+</tr>
+<tr class="even">
+<td>Server send/receive</td>
+<td><code class="ph codeph">gfsh start server</code> or
+<p>cache.xml &lt;CacheServer&gt;</p></td>
+<td>socket-buffer-size</td>
+</tr>
+<tr class="odd">
+<td><strong>UDP Multicast</strong></td>
+<td>---</td>
+<td>---</td>
+</tr>
+<tr class="even">
+<td>Peer-to-peer send</td>
+<td>gemfire.properties</td>
+<td>mcast-send-buffer-size</td>
+</tr>
+<tr class="odd">
+<td>Peer-to-peer receive</td>
+<td>gemfire.properties</td>
+<td>mcast-recv-buffer-size</td>
+</tr>
+<tr class="even">
+<td><strong>UDP Unicast</strong></td>
+<td>---</td>
+<td>---</td>
+</tr>
+<tr class="odd">
+<td>Peer-to-peer send</td>
+<td>gemfire.properties</td>
+<td>udp-send-buffer-size</td>
+</tr>
+<tr class="even">
+<td>Peer-to-peer receive</td>
+<td>gemfire.properties</td>
+<td>udp-recv-buffer-size</td>
+</tr>
+</tbody>
+</table>
+
+**TCP/IP Buffer Sizes**
+
+If possible, your TCP/IP buffer size settings should match across your Geode 
installation. At a minimum, follow the guidelines listed here.
+
+-   **Peer-to-peer**. The socket-buffer-size setting in `gemfire.properties` 
should be the same throughout your distributed system.
+-   **Client/server**. The client’s pool socket-buffer-size should match the
setting for the servers the pool uses, as in these example `cache.xml` snippets:
+
+    ``` pre
+    Client Socket Buffer Size cache.xml Configuration:
+    <pool name="PoolA" server-group="dataSetA" socket-buffer-size="42000"...
+
+    Server Socket Buffer Size cache.xml Configuration:
+    <cache-server port="40404" socket-buffer-size="42000">
+        <group>dataSetA</group>
+    </cache-server>
+    ```
+
+**UDP Multicast and Unicast Buffer Sizes**
+
+With UDP communication, one receiver can have many senders sending to it at 
once. To accommodate all of the transmissions, the receiving buffer should be 
larger than the sum of the sending buffers. If you have a system with at most 
five members running at any time, in which all members update their data 
regions, you would set the receiving buffer to at least five times the size of 
the sending buffer. If you have a system with producer and consumer members, 
where only two producer members ever run at once, the receiving buffer sizes 
should be set to more than two times the sending buffer sizes, as shown in this
example:
+
+``` pre
+mcast-send-buffer-size=42000
+mcast-recv-buffer-size=90000
+udp-send-buffer-size=42000
+udp-recv-buffer-size=90000
+```
+
+**Operating System Limits**
+
+Your operating system sets limits on the buffer sizes it allows. If you 
request a size larger than the allowed maximum, you may get warnings or exceptions
about the setting during startup. These are two examples of the type of message 
you may see:
+
+``` pre
+[warning 2008/06/24 16:32:20.286 PDT CacheRunner <main> tid=0x1]
+requested multicast send buffer size of 9999999 but got 262144: see 
+system administration guide for how to adjust your OS 
+
+Exception in thread "main" java.lang.IllegalArgumentException: Could not 
+set "socket-buffer-size" to "99262144" because its value can not be 
+greater than "20000000".
+```
+
+If you think you are requesting more space for your buffer sizes than your 
system allows, check with your system administrator about adjusting the 
operating system limits.
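+
+On Linux, for example, the operating system's maximum socket buffer sizes are typically governed by kernel parameters such as the following (the values are illustrative; other operating systems use different mechanisms):
+
+``` pre
+# /etc/sysctl.conf (Linux)
+net.core.rmem_max = 90000
+net.core.wmem_max = 42000
+```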
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_communication_tcpip_p2p_handshake_timeouts.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/socket_communication_tcpip_p2p_handshake_timeouts.html.md.erb
 
b/geode-docs/managing/monitor_tune/socket_communication_tcpip_p2p_handshake_timeouts.html.md.erb
new file mode 100644
index 0000000..e33dd9c
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/socket_communication_tcpip_p2p_handshake_timeouts.html.md.erb
@@ -0,0 +1,21 @@
+---
+title:  TCP/IP Peer-to-Peer Handshake Timeouts
+---
+
+You can alleviate connection handshake timeouts for TCP/IP connections by 
increasing the connection handshake timeout interval with the system property 
p2p.handshakeTimeoutMs.
+
+The default setting is 59000 milliseconds.
+
+This sets the handshake timeout to 75000 milliseconds for a Java application:
+
+``` pre
+-Dp2p.handshakeTimeoutMs=75000
+```
+
+The property is passed to the cache server on the `gfsh` command line:
+
+``` pre
+gfsh>start server --name=server_name --J=-Dp2p.handshakeTimeoutMs=75000
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/socket_tcp_keepalive.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/socket_tcp_keepalive.html.md.erb 
b/geode-docs/managing/monitor_tune/socket_tcp_keepalive.html.md.erb
new file mode 100644
index 0000000..b960fd4
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/socket_tcp_keepalive.html.md.erb
@@ -0,0 +1,14 @@
+---
+title:  TCP/IP KeepAlive Configuration
+---
+
+Geode supports TCP KeepAlive to prevent socket connections from being timed 
out.
+
+The `gemfire.enableTcpKeepAlive` system property prevents connections that 
appear idle from being timed out (for example, by a firewall). When configured
to true, Geode enables the SO\_KEEPALIVE option for individual sockets. This 
operating system-level setting allows the socket to send verification checks 
(ACK requests) to remote systems in order to determine whether or not to keep 
the socket connection alive.
+
+**Note:**
+The time interval before the first KeepAlive ACK request is sent, the interval 
between subsequent ACK requests, and the number of requests to send before 
closing the socket are configured at the operating system level.
+
+By default, this system property is set to true.
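+
+To set the property explicitly, it can be passed to a cache server on the `gfsh` command line, following the `--J` convention used elsewhere in this guide:
+
+``` pre
+gfsh>start server --name=server_name --J=-Dgemfire.enableTcpKeepAlive=true
+```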
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/sockets_and_gateways.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/monitor_tune/sockets_and_gateways.html.md.erb 
b/geode-docs/managing/monitor_tune/sockets_and_gateways.html.md.erb
new file mode 100644
index 0000000..4910453
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/sockets_and_gateways.html.md.erb
@@ -0,0 +1,105 @@
+---
+title:  Configuring Sockets in Multi-Site (WAN) Deployments
+---
+
+When you determine buffer size settings, you try to strike a balance between 
communication needs and other processing.
+
+This table lists the settings for gateway relationships and protocols, and 
tells where to set them.
+
+<table>
+<colgroup>
+<col width="33%" />
+<col width="33%" />
+<col width="33%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Protocol / Area Affected</th>
+<th>Configuration Location</th>
+<th>Property Name</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><strong>TCP / IP</strong></td>
+<td>---</td>
+<td>---</td>
+</tr>
+<tr class="even">
+<td>Gateway sender</td>
+<td><code class="ph codeph">gfsh create gateway-sender</code> or
+<p>cache.xml &lt;gateway-sender&gt;</p></td>
+<td>socket-buffer-size</td>
+</tr>
+<tr class="odd">
+<td>Gateway receiver</td>
+<td><code class="ph codeph">gfsh create gateway-receiver</code> or cache.xml 
&lt;gateway-receiver&gt;</td>
+<td>socket-buffer-size</td>
+</tr>
+</tbody>
+</table>
+
+**TCP/IP Buffer Sizes**
+
+If possible, your TCP/IP buffer size settings should match across your Geode 
installation. At a minimum, follow the guidelines listed here.
+
+-   **Multisite (WAN)**. In a multi-site installation using gateways, if the 
link between sites is not tuned for optimum throughput, it could cause messages 
to back up in the cache queues. If a receiving queue overflows because of 
inadequate buffer sizes, it will become out of sync with the sender and the 
receiver will be unaware of the condition.
+
+    The gateway sender's socket-buffer-size attribute should match the gateway 
receiver’s socket-buffer-size attribute for all gateway receivers that the 
sender connects to, as in these example `cache.xml` snippets:
+
+    ``` pre
+    Gateway Sender Socket Buffer Size cache.xml Configuration: 
+
+    <gateway-sender id="sender2" parallel="true"
+     remote-distributed-system-id="2"
+     socket-buffer-size="42000"
+     maximum-queue-memory="150"/>
+
+    Gateway Receiver Socket Buffer Size cache.xml Configuration:
+    <gateway-receiver start-port="1530" end-port="1551"
+     socket-buffer-size="42000"/>  
+    ```
+
+**Note:**
+WAN deployments increase the messaging demands on a Geode system. To avoid 
hangs related to WAN messaging, always set `conserve-sockets=false` for Geode 
members that participate in a WAN deployment.
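+
+This is a single line in each participating member's `gemfire.properties`:
+
+``` pre
+conserve-sockets=false
+```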
+
+## <a id="socket_comm__section_4A7C60D4471A4339884AA5AAC97B4DAA" 
class="no-quick-link"></a>Multi-site (WAN) Socket Requirements
+
+Each gateway sender and gateway receiver uses a socket to distribute events or 
to listen for incoming connections from remote sites.
+
+<table>
+<colgroup>
+<col width="50%" />
+<col width="50%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Multi-site Socket Description</th>
+<th>Number Used</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td><p>Listener for incoming connections</p></td>
+<td><p>the number of gateway receivers defined for the member</p></td>
+</tr>
+<tr class="even">
+<td><p>Incoming connection</p></td>
+<td><p>the total number of remote gateway senders configured to connect to the 
gateway receiver</p></td>
+</tr>
+<tr class="odd">
+<td><p>Outgoing connection</p></td>
+<td><p>the number of gateway senders defined for the member</p></td>
+</tr>
+</tbody>
+</table>
+
+Servers are peers in their own distributed system and have the additional 
socket requirements noted in the Peer-to-Peer section above.
+
+## <a id="socket_comm__section_66D11C8E84F941B58800EDB52194B087" 
class="no-quick-link"></a>Member produces SocketTimeoutException
+
+A client, server, gateway sender, or gateway receiver produces a 
SocketTimeoutException when it stops waiting for a response from the other side 
of the connection and closes the socket. This exception typically happens on 
the handshake or when establishing a callback connection.
+
+Response:
+
+Increase the default socket timeout setting for the member. This timeout is 
set separately for the client Pool and for the gateway sender and gateway 
receiver, either in the `cache.xml` file or through the API. For a 
client/server configuration, adjust the "read-timeout" value as described in 
[&lt;pool&gt;](../../reference/topics/client-cache.html#cc-pool) or use the 
`org.apache.geode.cache.client.PoolFactory.setReadTimeout` method. For a 
gateway sender or gateway receiver, see [WAN 
Configuration](../../reference/topics/elements_ref.html#topic_7B1CABCAD056499AA57AF3CFDBF8ABE3).
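+
+For example, a client pool's read timeout might be raised in `cache.xml` like this (the pool name, locator address, and 30000-millisecond value are illustrative):
+
+``` pre
+<pool name="serverPool" read-timeout="30000">
+  <locator host="localhost" port="10334"/>
+</pool>
+```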

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/system_member_performance.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/system_member_performance.html.md.erb 
b/geode-docs/managing/monitor_tune/system_member_performance.html.md.erb
new file mode 100644
index 0000000..72cfe8a
--- /dev/null
+++ b/geode-docs/managing/monitor_tune/system_member_performance.html.md.erb
@@ -0,0 +1,25 @@
+---
+title:  System Member Performance
+---
+
+You can modify some configuration parameters to improve system member 
performance.
+
+Before doing so, you should understand [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
+
+-   **[Distributed System Member 
Properties](../../managing/monitor_tune/system_member_performance_distributed_system_member.html)**
+
+    Several performance-related properties apply to a cache server or 
application that connects to the distributed system.
+
+-   **[JVM Memory Settings and System 
Performance](../../managing/monitor_tune/system_member_performance_jvm_mem_settings.html)**
+
+    You configure JVM memory settings for the Java application by adding 
parameters to the java invocation. For the cache server, you add them to the 
command-line parameters for the gfsh `start server` command.
+
+-   **[Garbage Collection and System 
Performance](../../managing/monitor_tune/system_member_performance_garbage.html)**
+
+    If your application exhibits unacceptably high latencies, you might 
improve performance by modifying your JVM’s garbage collection behavior.
+
+-   **[Connection Thread Settings and 
Performance](../../managing/monitor_tune/system_member_performance_connection_thread_settings.html)**
+
+    When many peer processes are started concurrently, you can improve the 
distributed system connect time by setting the p2p.HANDSHAKE\_POOL\_SIZE 
system property to the expected number of members.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/system_member_performance_connection_thread_settings.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/system_member_performance_connection_thread_settings.html.md.erb
 
b/geode-docs/managing/monitor_tune/system_member_performance_connection_thread_settings.html.md.erb
new file mode 100644
index 0000000..0c13022
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/system_member_performance_connection_thread_settings.html.md.erb
@@ -0,0 +1,15 @@
+---
+title:  Connection Thread Settings and Performance
+---
+
+When many peer processes are started concurrently, you can improve the 
distributed system connect time by setting the p2p.HANDSHAKE\_POOL\_SIZE 
system property to the expected number of members.
+
+This property controls the number of threads that can be used to establish new 
TCP/IP connections between peer caches. The threads are discarded if they are 
idle for 60 seconds.
+
+The default value for p2p.HANDSHAKE\_POOL\_SIZE is 10. This command-line 
specification sets the number of threads to 100:
+
+``` pre
+-Dp2p.HANDSHAKE_POOL_SIZE=100
+```
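+
+As with other system properties in this guide, the setting can also be passed to a cache server on the `gfsh` command line:
+
+``` pre
+gfsh>start server --name=server_name --J=-Dp2p.HANDSHAKE_POOL_SIZE=100
+```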
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/system_member_performance_distributed_system_member.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/system_member_performance_distributed_system_member.html.md.erb
 
b/geode-docs/managing/monitor_tune/system_member_performance_distributed_system_member.html.md.erb
new file mode 100644
index 0000000..6b885b9
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/system_member_performance_distributed_system_member.html.md.erb
@@ -0,0 +1,11 @@
+---
+title:  Distributed System Member Properties
+---
+
+Several performance-related properties apply to a cache server or application 
that connects to the distributed system.
+
+-   **statistic-sampling-enabled**. Turning off statistics sampling saves
resources, but it also takes away potentially valuable information for ongoing 
system tuning and unexpected system problems. If LRU eviction is configured, 
then statistics sampling must be on.
+-   **statistic-sample-rate**. This value is the interval, in milliseconds, 
between statistics samples. Increasing it reduces system resource use while 
still providing some statistics for system tuning and failure analysis.
+-   **log-level**. As with the statistic sample rate, lowering this setting 
reduces system resource consumption. See 
[Logging](../logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
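+
+For illustration, these properties are set in `gemfire.properties` (the values shown are examples, not recommendations):
+
+``` pre
+statistic-sampling-enabled=true
+statistic-sample-rate=2000
+log-level=warning
+```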
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/system_member_performance_garbage.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/system_member_performance_garbage.html.md.erb
 
b/geode-docs/managing/monitor_tune/system_member_performance_garbage.html.md.erb
new file mode 100644
index 0000000..b9231ce
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/system_member_performance_garbage.html.md.erb
@@ -0,0 +1,36 @@
+---
+title:  Garbage Collection and System Performance
+---
+
+If your application exhibits unacceptably high latencies, you might improve 
performance by modifying your JVM’s garbage collection behavior.
+
+Garbage collection, while necessary, introduces latency into your system by 
consuming resources that would otherwise be available to your application. You 
can reduce the impact of garbage collection in two ways:
+
+-   Optimize garbage collection in the JVM heap.
+-   Reduce the amount of data exposed to garbage collection by storing values 
in off-heap memory.
+
+**Note:**
+Garbage collection tuning options depend on the JVM you are using. Suggestions 
given here apply to the Sun HotSpot JVM. If you use a different JVM, check with 
your vendor to see if these or comparable options are available to you.
+
+**Note:**
+Modifications to garbage collection sometimes produce unexpected results. 
Always test your system before and after making changes to verify that the 
system’s performance has improved.
+
+**Optimizing Garbage Collection**
+
+The two options suggested here are likely to expedite garbage collecting 
activities by introducing parallelism and by focusing on the data that is most 
likely to be ready for cleanup. The first parameter causes the garbage 
collector to run concurrently with your application threads. The second parameter
causes it to run multiple, parallel threads for the "young generation" garbage 
collection (that is, garbage collection performed on the most recent objects in 
memory—where the greatest benefits are expected):
+
+``` pre
+-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
+```
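+
+The same options can be passed to a cache server on the `gfsh` command line:
+
+``` pre
+gfsh>start server --name=server-name --J=-XX:+UseConcMarkSweepGC --J=-XX:+UseParNewGC
+```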
+
+For applications, if you are using remote method invocation (RMI) Java APIs, 
you might also be able to reduce latency by disabling explicit calls to the 
garbage collector. The RMI internals automatically invoke garbage collection 
every sixty seconds to ensure that objects introduced by RMI activities are 
cleaned up. Your JVM may be able to handle these additional garbage collection 
needs. If so, your application may run faster with explicit garbage collection 
disabled. You can try adding the following command-line parameter to your 
application invocation and test to see if your garbage collector is able to 
keep up with demand:
+
+``` pre
+-XX:+DisableExplicitGC
+```
+
+**Using Off-heap Memory**
+
+You can improve the performance of some applications by storing data values in 
off-heap memory. Certain objects, such as keys, must remain in the JVM heap. 
See [Managing Off-Heap Memory](../heap_use/off_heap_management.html) for more 
information.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/ccc2fbda/geode-docs/managing/monitor_tune/system_member_performance_jvm_mem_settings.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/monitor_tune/system_member_performance_jvm_mem_settings.html.md.erb
 
b/geode-docs/managing/monitor_tune/system_member_performance_jvm_mem_settings.html.md.erb
new file mode 100644
index 0000000..08d5700
--- /dev/null
+++ 
b/geode-docs/managing/monitor_tune/system_member_performance_jvm_mem_settings.html.md.erb
@@ -0,0 +1,61 @@
+---
+title:  JVM Memory Settings and System Performance
+---
+
+You configure JVM memory settings for the Java application by adding 
parameters to the java invocation. For the cache server, you add them to the 
command-line parameters for the gfsh `start server` command.
+
+-   JVM heap size—Your JVM may require more memory than is allocated by 
default. For example, you may need to increase heap size for an application 
that stores a lot of data. You can set a maximum size and an initial size, so 
if you know you will be using the maximum (or close to it) for the life of the 
member, you can speed memory allocation time by setting the initial size to the 
maximum. This sets both the maximum and initial memory sizes to 1024 megabytes 
for a Java application:
+
+    ``` pre
+    -Xmx1024m -Xms1024m
+    ```
+
+    Properties can be passed to the cache server on the `gfsh` command line:
+
+    ``` pre
+    gfsh>start server --name=server-name --J=-Xmx1024m --J=-Xms1024m
+    ```
+
+-   MaxDirectMemorySize—The JVM has a kind of memory called direct memory, 
which is distinct from normal JVM heap memory and can also be exhausted. You can
increase the direct buffer memory either by increasing the maximum heap size 
(see previous JVM Heap Size), which increases both the maximum heap and the 
maximum direct memory, or by only increasing the maximum direct memory using 
-XX:MaxDirectMemorySize. The following parameter added to the Java application 
startup increases the maximum direct memory size to 256 megabytes:
+
+    ``` pre
+    -XX:MaxDirectMemorySize=256M
+    ```
+
+    The same effect for the cache server:
+
+    ``` pre
+    gfsh>start server --name=server-name --J=-XX:MaxDirectMemorySize=256M
+    ```
+
+-   JVM stack size—Each thread in a Java application has its own stack. The 
stack is used to hold return addresses, arguments to functions and method 
calls, and so on. Since Geode is a highly multi-threaded system, at any given 
point in time there are multiple thread pools and threads that are in use. The 
default stack size setting for a thread in Java is 1 MB. Stack size must be 
allocated in contiguous blocks, and if the machine is being used actively and 
there are many threads running in the system (Task Manager shows the number of 
active threads), you may encounter a `java.lang.OutOfMemoryError: unable to 
create new native thread`, even though your process has enough available heap. If this
happens, consider reducing the stack size requirement for threads on the cache 
server. The following parameter added to the Java application startup limits 
the maximum size of the stack.
+
+    ``` pre
+    -Xss384k
+    ```
+
+    In particular, we recommend starting the cache servers with a stack size 
of 384k or 512k in such cases. For example:
+
+    ``` pre
+    gfsh>start server --name=server-name --J=-Xss384k
+
+    gfsh>start server --name=server-name --J=-Xss512k
+    ```
+
+-   Off-heap memory size—For applications that use off-heap memory, this 
parameter specifies how much off-heap memory to allocate. Setting `off-heap-memory-size`
is prerequisite to enabling the off-heap capability for individual regions. For 
example:
+
+    ``` pre
+    gfsh>start server --name=server-name --off-heap-memory-size=200G
+    ```
+
+    See [Using Off-heap 
Memory](../heap_use/off_heap_management.html#managing-off-heap-memory) for 
additional considerations regarding this parameter.
+
+-   Lock memory—On Linux systems, you can prevent heap and off-heap memory 
from being paged out by setting the `lock-memory` parameter to `true`. For 
example:
+
+    ``` pre
+    gfsh>start server --name=server-name --off-heap-memory-size=200G 
--lock-memory=true
+    ```
+
+    See [Locking Memory](../heap_use/lock_memory.html) for additional 
considerations regarding this parameter.
+
+
