davebarnes97 commented on a change in pull request #6134:
URL: https://github.com/apache/geode/pull/6134#discussion_r594702368
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
+title: Offline Upgrade
+---
+
+Use the offline upgrade procedure when you cannot, or choose not to, perform a rolling upgrade.
+For example, a rolling upgrade is not possible for a cluster that has partitioned regions without redundancy.
+(Without redundancy, region entries would be lost when individual servers were taken out of the cluster during a rolling upgrade.)
+
+## <a id="offline-upgrade-guidelines" class="no-quick-link"></a>Offline
Upgrade Guidelines
+
+**Versions**
+
+For best reliability and performance, all server components of a <%=vars.product_name%> system should run the same version of the software.
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities) for more details on how different versions of <%=vars.product_name%> can interoperate.
+
+**Data member interdependencies**
+
+When you restart your upgraded servers, interdependent data members may hang on startup waiting for each other. In this case, start the servers in
+separate command shells so they can start simultaneously and communicate with one another to resolve dependencies.
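+
+For example, a minimal sketch of starting two interdependent servers from separate shells; the member names, ports, and locator address are placeholders:
+
``` pre
# Shell 1:
% gfsh start server --name=server1 --locators=localhost[10334] --server-port=40404

# Shell 2, started at the same time rather than after server1 finishes:
% gfsh start server --name=server2 --locators=localhost[10334] --server-port=40405
```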
+
+## <a id="offline-upgrade-procedure" class="no-quick-link"></a>Offline Upgrade
Procedure
+
+1. Stop any connected clients.
+
+1. On a machine hosting a locator, open a terminal console.
+
+1. Start a `gfsh` prompt, using the version from your current <%=vars.product_name%> installation, and connect to a currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+1. Use `gfsh` commands to characterize your current installation so you can compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+1. Save your cluster configuration.
+    - If you are using the cluster configuration service, use the gfsh `export cluster-configuration` command, as shown in the example below. You only need to do this once, as the newly-upgraded locator will propagate the configuration to newly-upgraded members as they come online.
+    - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any other relevant configuration files to a well-known location. You must repeat this step for each member you upgrade.
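+
+    For example, a minimal export of the cluster configuration to a `.zip` file (the path is a placeholder):
+
    ``` pre
    gfsh>export cluster-configuration --zip-file-name=/safe/location/cluster-config.zip
    ```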
+
+1. Shut down the entire cluster; answering Y at the prompt loses no persisted data:
+
+ ``` pre
+ gfsh>shutdown --include-locators=true
    As a lot of data in memory will be lost, including possibly events in queues, do you really want to shutdown the entire distributed system? (Y/n): y
+ Shutdown is triggered
+
+ gfsh>
+ No longer connected to 172.16.71.1[1099].
+ gfsh>quit
+ ```
+
+    Because <%=vars.product_name%> members run as Java processes, you can use the JDK-included `jps` command to verify, before continuing, that all members have stopped:
+
+ ``` pre
+ % jps
+ 29664 Jps
+ ```
+
+1. On each machine in the cluster, install the new version of the software alongside the older version.
+
+1. Redeploy your environment's configuration files to the new installation. If you are using the cluster configuration service, one copy of the exported `.zip` configuration file is sufficient, as the first upgraded locator will propagate it to the other members, as sketched in the example below.
+For XML configurations, you should have a copy of the saved configuration files for each data member.
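+
+For example, when using the cluster configuration service, a sketch of re-importing the saved configuration after connecting to the first upgraded locator; the `.zip` path is a placeholder, and some versions require that no cache servers be running during the import:
+
    ``` pre
    gfsh>import cluster-configuration --zip-file-name=/safe/location/cluster-config.zip
    ```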
+
+1. On each machine in the cluster, install any updated server code. Point all client applications to the new installation of <%=vars.product_name%>.
Review comment:
Thank you
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running the earlier version of <%=vars.product_name%>, so servers can respond to
Review comment:
Thank you
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running the earlier version of <%=vars.product_name%>, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of <%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a distributed system.
+Under some circumstances, rolling upgrades can also be applied within individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned Regions](../../developing/partitioned_regions/checking_region_redundancy.html) for details.
+
+If a rolling upgrade is not possible for your system, follow the [Offline Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of members running different versions of <%=vars.product_name%>.
Review comment:
Thank you
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running the earlier version of <%=vars.product_name%>, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of <%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a distributed system.
+Under some circumstances, rolling upgrades can also be applied within individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned Regions](../../developing/partitioned_regions/checking_region_redundancy.html) for details.
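+
+For example, recent <%=vars.product_name%> versions provide a gfsh `status redundancy` command that summarizes the redundancy status of partitioned regions; this is a sketch, and if your version lacks the command, use the procedure on the page linked above:
+
``` pre
gfsh>status redundancy
```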
+
+If a rolling upgrade is not possible for your system, follow the [Offline Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of members running different versions of <%=vars.product_name%>.
+During this period, do not execute region operations such as region creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If `startup-recovery-delay` is disabled (set to -1) for your partitioned region, you must rebalance the region after you restart each member, as shown in the example below.
+If `startup-recovery-delay` is enabled (set to a value other than -1), rebalancing occurs automatically; make sure each rebalance completes before you stop the next server.
+If `startup-recovery-delay` is enabled and set to a high value, allow extra time for the region to recover redundancy, because rebalancing must complete before the next server is restarted.
+The partitioned region attribute `startup-recovery-delay` is described in [Configure Member Join Redundancy Recovery for a Partitioned Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
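+
+For example, a manual rebalance is a single gfsh command, run while connected to the cluster; the optional `--include-region` flag, shown here with a placeholder region name, limits the scope:
+
``` pre
gfsh>rebalance --include-region=/myPartitionedRegion
```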
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current <%=vars.product_name%> version of all members in the cluster by looking at the server or locator logs.
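+
+For example, a rough sketch of scanning a member's log for its reported version; the log file path is a placeholder, and the exact wording of the version banner varies by release:
+
``` pre
% grep -i version server1/server1.log
```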
Review comment:
Thank you
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]