alb3rtobr commented on a change in pull request #6134:
URL: https://github.com/apache/geode/pull/6134#discussion_r593501756
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
+title: Offline Upgrade
+---
+
+Use the offline upgrade procedure when you cannot, or choose not to, perform a
rolling upgrade.
+For example, a rolling upgrade is not possible for a cluster that has
partitioned regions without redundancy.
+(Without the redundancy, region entries would be lost when individual servers
were taken out of the cluster during a rolling upgrade.)
+
+## <a id="offline-upgrade-guidelines" class="no-quick-link"></a>Offline
Upgrade Guidelines
+
+**Versions**
+
+For best reliability and performance, all server components of a
<%=vars.product_name%> system should run the same version of the software.
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
for more details on how different versions of GemFire can interoperate.
+
+**Data member interdependencies**
+
+When you restart your upgraded servers, interdependent data members may hang
on startup waiting for each other. In this case, start the servers in
+separate command shells so they can start simultaneously and communicate with
one another to resolve dependencies.
+
+## <a id="offline-upgrade-procedure" class="no-quick-link"></a>Offline Upgrade
Procedure
+
+1. Stop any connected clients.
+
+1. On a machine hosting a locator, open a terminal console.
+
+1. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to a currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+1. Use `gfsh` commands to characterize your current installation so you can
compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+1. Save your cluster configuration.
+ - If you are using the cluster configuration service, use the gfsh `export
cluster-configuration` command. You only need to do this once, as the
newly-upgraded locator will propagate the configuration to newly-upgraded
members as they come online.
+ - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any
other relevant configuration files to a well-known location. You must repeat
this step for each member you upgrade.
+
+1. Shut down the entire cluster (by pressing Y at the prompt, this will lose
no persisted data):
+
+ ``` pre
+ gfsh>shutdown --include-locators=true
+ As a lot of data in memory will be lost, including possibly events in
queues, do you really want to shutdown the entire distributed system? (Y/n): y
+ Shutdown is triggered
+
+ gfsh>
+ No longer connected to 172.16.71.1[1099].
+ gfsh>quit
+ ```
+
+ Since GemFire is a Java process, to check before continuing that all
GemFire members successfully stopped,
+it is useful to use the JDK-included `jps` command to check running java
processes:
+
+ ``` pre
+ % jps
+ 29664 Jps
+ ```
+
+1. On each machine in the cluster, install the new version of the software
(alongside the older version of the software).
+
+1. Redeploy your environment's configuration files to the new version
installation. If you are using the cluster configuration service, one copy of
the exported `.zip` configuration file is sufficient, as the first upgraded
locator will propagate it to the other members.
+For XML configurations, you should have a copy of the saved configuration
files for each data member.
+
+1. On each machine in the cluster, install any updated server code. Point all
client applications to the new installation of GemFire.
Review comment:
Another "GemFire" here
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
Review comment:
There are three appearances of "GemFire". Also, change `GFE 9.0.0` to a
Geode version string; I used `GEODE 1.2.0`.
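For illustration, with the suggested string the first quoted log entry would look something like this (a sketch only, not actual output):

```
frodo(server2:21617)<v4>:14973( version:GEODE 1.2.0 )
```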
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
+title: Offline Upgrade
+---
+
+Use the offline upgrade procedure when you cannot, or choose not to, perform a
rolling upgrade.
+For example, a rolling upgrade is not possible for a cluster that has
partitioned regions without redundancy.
+(Without the redundancy, region entries would be lost when individual servers
were taken out of the cluster during a rolling upgrade.)
+
+## <a id="offline-upgrade-guidelines" class="no-quick-link"></a>Offline
Upgrade Guidelines
+
+**Versions**
+
+For best reliability and performance, all server components of a
<%=vars.product_name%> system should run the same version of the software.
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
for more details on how different versions of GemFire can interoperate.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_overview.html.md.erb
##########
@@ -0,0 +1,38 @@
+<% set_title("Upgrading", product_name_long) %>
Review comment:
The Apache License header has to be added.
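For reference, a sketch of the standard ASF header as it appears in other geode-docs pages, wrapped in an HTML comment so it does not render:

```
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements.  See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with the
License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
```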
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
+title: Offline Upgrade
+---
+
+Use the offline upgrade procedure when you cannot, or choose not to, perform a
rolling upgrade.
+For example, a rolling upgrade is not possible for a cluster that has
partitioned regions without redundancy.
+(Without the redundancy, region entries would be lost when individual servers
were taken out of the cluster during a rolling upgrade.)
+
+## <a id="offline-upgrade-guidelines" class="no-quick-link"></a>Offline
Upgrade Guidelines
+
+**Versions**
+
+For best reliability and performance, all server components of a
<%=vars.product_name%> system should run the same version of the software.
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
for more details on how different versions of GemFire can interoperate.
+
+**Data member interdependencies**
+
+When you restart your upgraded servers, interdependent data members may hang
on startup waiting for each other. In this case, start the servers in
+separate command shells so they can start simultaneously and communicate with
one another to resolve dependencies.
+
+## <a id="offline-upgrade-procedure" class="no-quick-link"></a>Offline Upgrade
Procedure
+
+1. Stop any connected clients.
+
+1. On a machine hosting a locator, open a terminal console.
+
+1. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to a currently running locator.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_planning.html.md.erb
##########
@@ -0,0 +1,74 @@
+---
Review comment:
The Apache License header has to be added.
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
+title: Offline Upgrade
+---
+
+Use the offline upgrade procedure when you cannot, or choose not to, perform a
rolling upgrade.
+For example, a rolling upgrade is not possible for a cluster that has
partitioned regions without redundancy.
+(Without the redundancy, region entries would be lost when individual servers
were taken out of the cluster during a rolling upgrade.)
+
+## <a id="offline-upgrade-guidelines" class="no-quick-link"></a>Offline
Upgrade Guidelines
+
+**Versions**
+
+For best reliability and performance, all server components of a
<%=vars.product_name%> system should run the same version of the software.
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
for more details on how different versions of GemFire can interoperate.
+
+**Data member interdependencies**
+
+When you restart your upgraded servers, interdependent data members may hang
on startup waiting for each other. In this case, start the servers in
+separate command shells so they can start simultaneously and communicate with
one another to resolve dependencies.
+
+## <a id="offline-upgrade-procedure" class="no-quick-link"></a>Offline Upgrade
Procedure
+
+1. Stop any connected clients.
+
+1. On a machine hosting a locator, open a terminal console.
+
+1. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to a currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+1. Use `gfsh` commands to characterize your current installation so you can
compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+1. Save your cluster configuration.
+ - If you are using the cluster configuration service, use the gfsh `export
cluster-configuration` command. You only need to do this once, as the
newly-upgraded locator will propagate the configuration to newly-upgraded
members as they come online.
+ - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any
other relevant configuration files to a well-known location. You must repeat
this step for each member you upgrade.
+
+1. Shut down the entire cluster (by pressing Y at the prompt, this will lose
no persisted data):
+
+ ``` pre
+ gfsh>shutdown --include-locators=true
+ As a lot of data in memory will be lost, including possibly events in
queues, do you really want to shutdown the entire distributed system? (Y/n): y
+ Shutdown is triggered
+
+ gfsh>
+ No longer connected to 172.16.71.1[1099].
+ gfsh>quit
+ ```
+
+ Since GemFire is a Java process, to check before continuing that all
GemFire members successfully stopped,
Review comment:
There are two appearances of "GemFire" in this line
##########
File path: geode-book/config.yml
##########
@@ -28,8 +28,9 @@ template_variables:
product_name_long: Apache Geode
product_name: Geode
product_name_lowercase: geode
- product_version: '1.15.0'
+ product_version: '1.15'
product_version_nodot: '115'
+ product_version_old_minor: '1.14'
Review comment:
If this variable is intended to be the previous minor version of the
current product version, I think it would be more intuitive to call it
`product_version_previous_minor`.
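For example, the `config.yml` block would then read (a sketch of the suggested rename, not the committed change):

```yaml
  product_version: '1.15'
  product_version_nodot: '115'
  product_version_previous_minor: '1.14'
```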
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_clients.html.md.erb
##########
@@ -0,0 +1,21 @@
+---
Review comment:
The Apache License header has to be added.
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
+
+``` pre
+[info 2013/06/03 10:03:29.206 PDT frodo <vm_1_thr_1_frodo> tid=0x1a]
DistributionManager frodo(locator1:21869:locator)<v16>:28242 started on
frodo[15001]. There
+ were 2 other DMs. others: [frodo(server2:21617)<v4>:14973( version:GFE
9.0.0 ), frodo(server1:21069)<v1>:60929( version:GFE 9.0.0 )] (locator)
+```
+
+After some members have been upgraded, non-upgraded members will log the
following message when they receive a new membership view:
+
+``` pre
+Membership: received new view [frodo(locator1:20786)<v0>:32240|4]
+ [frodo(locator1:20786)<v0>:32240/51878,
frodo(server1:21069)<v1>:60929/46949,
+ frodo(server2:21617)<v4>( version:UNKNOWN[ordinal=23] ):14973/33919]
+```
+
+ Non-upgraded members identify members that have been upgraded to the next
version with `version: UNKNOWN`.
+
+**Cluster configuration affects save and restore**
+
+The way in which your cluster configuration was created determines which
commands you use to save
+ and restore that cluster configuration during the upgrade procedure.
+
+ - If your system was configured with `gfsh` commands, relying on the
underlying **cluster configuration service**, the configuration can be saved in
one central location, then applied to all newly-upgraded members. See
[Exporting and Importing Cluster
Configurations](../../configuring/cluster_config/export-import.html).
+ - If your system was configured with **XML properties** specified through
the Java API or configuration files, you must save the configuration for each
member before you bring it down, then re-import it for that member's upgraded
counterpart. See [Deploying Configuration Files without the Cluster
Configuration Service](../../configuring/running/deploying_config_files.html).
+
+## <a id="rolling-upgrade-procedure" class="no-quick-link"></a>Rolling Upgrade
Procedure
+
+Begin by installing the new version of the software alongside the older
version of the software on all hosts. You will need both versions of the
software during the upgrade procedure.
+
+Upgrade locators first, then data members, then clients.
+
+### <a id="upgrade-locators" class="no-quick-link"></a>Upgrade Locators
+
+1. On the machine hosting the first locator you wish to upgrade, open a
terminal console.
+
+2. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to the currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+3. Use `gfsh` commands to characterize your current installation so you can
compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+4. Save your cluster configuration.
+ - If you are using the cluster configuration service, use the gfsh `export
cluster-configuration` command. You only need to do this once, as the
newly-upgraded locator will propagate the configuration to newly-upgraded
members as they come online.
+ - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any
other relevant configuration files to a well-known location. You must repeat
this step for each member you upgrade.
+
+5. Stop the locator. For example:
+
+ ``` pre
+ gfsh>stop locator --name=locator1
+ Stopping Locator running in /Users/username/sandbox/locator on
172.16.71.1[10334] as locator...
+ Process ID: 96686
+ Log File: /Users/username/sandbox/locator/locator.log
+ ....
+ No longer connected to 172.16.71.1[1099].
+ ```
+6. Start `gfsh` from the new GemFire installation.
+ Verify that you are running the newer version with
+
+ ``` pre
+ gfsh>version
+ ```
+
+7. Start a locator and import the saved configuration. If you are using the
cluster configuration service, use the same name and directory as the older
version you stopped, and the new locator will access the old locator's cluster
configuration without having to import it in a separate step:
+
+ ```
+ gfsh>start locator --name=locator1 --enable-cluster-configuration=true
--dir=/data/locator1
+ ```
+
+ Otherwise, use the gfsh `import cluster-configuration` command or
explicitly import `.xml` and `.properties` files, as appropriate.
+
+8. The new locator should reconnect to the same members as the older locator.
Use `list members` to verify:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 |
172.16.71.1(locator1:26752:locator)<ec><v17>:1024(version:UNKNOWN[ordinal=65])
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+9. Upgrade the remaining locators by stopping and restarting them. When you
have completed that step, the system gives a more coherent view of version
numbers:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026(version:GFE 9.0)
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
+ ```
+
+ The server entries show that the servers are running an older version of
gemfire, in this case `(version:GFE 9.0)`.
+
+### <a id="upgrade-servers" class="no-quick-link"></a>Upgrade Servers
+
+After you have upgraded all of the system's locators, upgrade the servers.
+
+1. Upgrade each server, one at a time, by stopping it and restarting it.
Restart the server with the same command-line options with which it was
originally started in the previous installation. For example:
+
+ ```
+ gfsh>stop server --name=server1
+ Stopping Cache Server running in /Users/share/server1 on
172.16.71.1[52139] as server1...
+
+ gfsh>start server --name=server1 --use-cluster-configuration=true
--server-port=0 --dir=/data/server1
+ Starting a Geode Server in /Users/share/server1...
+ ```
+
+ Use the `list members` command to verify that the server is now running
the new version of GemFire:
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_planning.html.md.erb
##########
@@ -0,0 +1,74 @@
+---
+title: Planning an Upgrade
+---
+
+Before you upgrade your system, back it up. Make backup copies of all
existing disk-stores,
+server-side code, configuration files, and data across the entire cluster. To
get a backup of the
+data that includes the most recent changes may require that traffic across the
cluster is stopped
+before the backup is made.
+The discussion at [Creating Backups for System Recovery and Operational
Management](../../managing/disk_storage/backup_restore_disk_store.html#backup_restore_disk_store)
+explains the process, and the
+[backup disk-store](../../tools_modules/gfsh/command-pages/backup.html)
command reference page describes
+how to use the `gfsh backup disk-store` command to make a backup.
+
+## <a id="guidelines-upgrading" class="no-quick-link"></a>Guidelines for
Upgrading
+
+- Schedule your upgrade during a period of low user activity for your system
and network.
+- **Important:** After all locators have been upgraded, *do not start or
restart any processes* that use the older version of the software. The older
process will either not be allowed to join the distributed system or, if
allowed to join, can potentially cause a deadlock.
+- Verify that all members that you wish to upgrade are members of the same
distributed system cluster.
+A list of cluster members will be output with the `gfsh` command:
+
+ ``` pre
+ gfsh>list members
+ ```
+
+- Locate a copy of your system's startup script, if your site has one (most
do). The startup script can be a handy reference for restarting upgraded
locators and servers with the same `gfsh` command lines that were used in your
current installation.
+
+- Identify how your current cluster configuration was specified. The way in
which your cluster
+ configuration was created determines which commands you use to save and
restore that cluster
+ configuration during the upgrade procedure. There are two possibilites:
+
+ - With `gfsh` commands, relying on the underlying **cluster configuration
service** to record the configuration: see [Exporting and Importing Cluster
Configurations](../../configuring/cluster_config/export-import.html).
+ - With **XML properties** specified through the Java API or configuration
files: see [Deploying Configuration Files without the Cluster Configuration
Service](../../configuring/running/deploying_config_files.html).
+
+- Do not modify region attributes or data, either via `gfsh` or `cache.xml`
configuration, during the upgrade process.
+
+- If possible, follow the [Rolling Upgrade](upgrade_rolling.html) procedure.
A multi-site
+installation can also do rolling upgrades within each site. If a rolling
upgrade is not possible,
+follow the [Off-Line Upgrade](upgrade_offline.html) procedure.
+A rolling upgrade is not possible for a cluster that has partitioned regions
without redundancy.
+Without the redundancy, region entries will be lost when individual servers
are taken out of the
+cluster during a rolling upgrade.
+
+## <a id="version_compatibilities" class="no-quick-link"></a>Version
Compatibilities
+
+Your choice of upgrade procedure depends, in part, on the versions of
<%=vars.product_name_long%> involved.
+
+- **Version Compatibility Between Peers and Cache Servers**
+
+ For best reliability and performance, all server components of a
<%=vars.product_name%> system should run the same version of the software.
+ For the purposes of a rolling upgrade, you can have peers or cache servers
running different minor
+ versions of <%=vars.product_name_long%> at the same time, as long as the
major version is the same. For example,
+ some components can continue to run under version
<%=vars.product_version_old_minor%> while you are in the process of upgrading to
+ version <%=vars.product_version%>.
+
+- **Version Compatibility Between Clients and Servers**
+
+ Client/server access is backward compatible. An
<%=vars.product_name_long%> cluster can be accessed by clients using any
previous version. However, clients
+ cannot connect to servers running older versions of
<%=vars.product_name_long%>. For example, a client running
<%=vars.product_name_long%> <%=vars.product_version_old_minor%> can access a
cluster
+ running <%=vars.product_name_long%> <%=vars.product_version%>, but a
client running <%=vars.product_name_long%> <%=vars.product_version%> could not
connect to a cluster running <%=vars.product_name_long%>
<%=vars.product_version_old_minor%>.
+
+- **Version Compatibility Between Sites in Multi-Site (WAN) Deployments**
+
+ In multi-site (WAN) deployments, sites should still be able to communicate
with one another, even if they use different versions.
+
+## <a id="java-notes" class="no-quick-link"></a>Java Notes
+
+- To check your current Java version, type `java -version` at a command-line
prompt.
+
+- <%=vars.product_name_long%> <%=vars.product_version%> requires Java SE 8,
version 92 or a more recent version.
+
+- The <%=vars.product_name_long%> product download does not include Java.
+You must download and install a supported JRE or JDK on each system
+running Geode. To obtain best performance with commands such as `gfsh status`
and `gfsh stop`,
Review comment:
Change "Geode" by `<%=vars.product_name_long%>`
##########
File path: geode-docs/getting_started/upgrade/upgrade_offline.html.md.erb
##########
@@ -0,0 +1,92 @@
+---
Review comment:
The Apache License header has to be added.
##########
File path: geode-docs/getting_started/upgrade/upgrade_overview.html.md.erb
##########
@@ -0,0 +1,38 @@
+<% set_title("Upgrading", product_name_long) %>
Review comment:
This file uses a different syntax for setting the title than the others
(this one uses `set_title`, while the others use plain front-matter text). I'm
not sure which format is correct.
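For comparison, the two styles in question, both taken from this PR:

```
<% set_title("Upgrading", product_name_long) %>
```

versus the front-matter style used by the other new pages:

```
---
title: Offline Upgrade
---
```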
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
Review comment:
The Apache License header has to be added.
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
+
+``` pre
+[info 2013/06/03 10:03:29.206 PDT frodo <vm_1_thr_1_frodo> tid=0x1a]
DistributionManager frodo(locator1:21869:locator)<v16>:28242 started on
frodo[15001]. There
+ were 2 other DMs. others: [frodo(server2:21617)<v4>:14973( version:GFE
9.0.0 ), frodo(server1:21069)<v1>:60929( version:GFE 9.0.0 )] (locator)
+```
+
+After some members have been upgraded, non-upgraded members will log the
following message when they receive a new membership view:
+
+``` pre
+Membership: received new view [frodo(locator1:20786)<v0>:32240|4]
+ [frodo(locator1:20786)<v0>:32240/51878,
frodo(server1:21069)<v1>:60929/46949,
+ frodo(server2:21617)<v4>( version:UNKNOWN[ordinal=23] ):14973/33919]
+```
+
+ Non-upgraded members identify members that have been upgraded to the next
version with `version: UNKNOWN`.
+
+**Cluster configuration affects save and restore**
+
+The way in which your cluster configuration was created determines which
commands you use to save
+ and restore that cluster configuration during the upgrade procedure.
+
+ - If your system was configured with `gfsh` commands, relying on the
underlying **cluster configuration service**, the configuration can be saved in
one central location, then applied to all newly-upgraded members. See
[Exporting and Importing Cluster
Configurations](../../configuring/cluster_config/export-import.html).
+ - If your system was configured with **XML properties** specified through
the Java API or configuration files, you must save the configuration for each
member before you bring it down, then re-import it for that member's upgraded
counterpart. See [Deploying Configuration Files without the Cluster
Configuration Service](../../configuring/running/deploying_config_files.html).
+
+## <a id="rolling-upgrade-procedure" class="no-quick-link"></a>Rolling Upgrade
Procedure
+
+Begin by installing the new version of the software alongside the older
version of the software on all hosts. You will need both versions of the
software during the upgrade procedure.
+
+Upgrade locators first, then data members, then clients.
+
+### <a id="upgrade-locators" class="no-quick-link"></a>Upgrade Locators
+
+1. On the machine hosting the first locator you wish to upgrade, open a
terminal console.
+
+2. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to the currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+3. Use `gfsh` commands to characterize your current installation so you can
compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+4. Save your cluster configuration.
+ - If you are using the cluster configuration service, use the gfsh `export
cluster-configuration` command. You only need to do this once, as the
newly-upgraded locator will propagate the configuration to newly-upgraded
members as they come online.
+ - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any
other relevant configuration files to a well-known location. You must repeat
this step for each member you upgrade.
+
+5. Stop the locator. For example:
+
+ ``` pre
+ gfsh>stop locator --name=locator1
+ Stopping Locator running in /Users/username/sandbox/locator on
172.16.71.1[10334] as locator...
+ Process ID: 96686
+ Log File: /Users/username/sandbox/locator/locator.log
+ ....
+ No longer connected to 172.16.71.1[1099].
+ ```
+6. Start `gfsh` from the new GemFire installation.
+ Verify that you are running the newer version with
+
+ ``` pre
+ gfsh>version
+ ```
+
+7. Start a locator and import the saved configuration. If you are using the
cluster configuration service, use the same name and directory as the older
version you stopped, and the new locator will access the old locator's cluster
configuration without having to import it in a separate step:
+
+ ```
+ gfsh>start locator --name=locator1 --enable-cluster-configuration=true
--dir=/data/locator1
+ ```
+
+ Otherwise, use the gfsh `import cluster-configuration` command or
explicitly import `.xml` and `.properties` files, as appropriate.
+
+8. The new locator should reconnect to the same members as the older locator.
Use `list members` to verify:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 |
172.16.71.1(locator1:26752:locator)<ec><v17>:1024(version:UNKNOWN[ordinal=65])
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+9. Upgrade the remaining locators by stopping and restarting them. When you
have completed that step, the system gives a more coherent view of version
numbers:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026(version:GFE 9.0)
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
+ ```
+
+ The server entries show that the servers are running an older version of
gemfire, in this case `(version:GFE 9.0)`.
+
+### <a id="upgrade-servers" class="no-quick-link"></a>Upgrade Servers
+
+After you have upgraded all of the system's locators, upgrade the servers.
+
+1. Upgrade each server, one at a time, by stopping it and restarting it.
Restart the server with the same command-line options with which it was
originally started in the previous installation. For example:
+
+ ```
+ gfsh>stop server --name=server1
+ Stopping Cache Server running in /Users/share/server1 on
172.16.71.1[52139] as server1...
+
+ gfsh>start server --name=server1 --use-cluster-configuration=true
--server-port=0 --dir=/data/server1
+ Starting a Geode Server in /Users/share/server1...
+ ```
+
+ Use the `list members` command to verify that the server is now running
the new version of GemFire:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26835)<v32>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
+ ```
+
+2. Restore data to the data member. If automatic rebalancing is enabled
(partitioned region
+ attribute `startup-recovery-delay` is set to a value other than -1), data
restoration will start
+ automatically. If automatic rebalancing is disabled (partitioned region
attribute
+ `startup-recovery-delay=-1`), you must initiate data restoration by
issuing the gfsh `rebalance`
+ command.
+
+ Wait until the newly-started server has been restored before upgrading the
next server. You can repeat various gfsh
+ `show metrics` command with the `--member` option or the `--region` option
to verify that the data member is hosting data and that
+ the amount of data it is hosting has stabilized.
+
+3. Shut down,restart, and rebalance servers until all data members are running
the new version of GemFire.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
+
+``` pre
+[info 2013/06/03 10:03:29.206 PDT frodo <vm_1_thr_1_frodo> tid=0x1a]
DistributionManager frodo(locator1:21869:locator)<v16>:28242 started on
frodo[15001]. There
+ were 2 other DMs. others: [frodo(server2:21617)<v4>:14973( version:GFE
9.0.0 ), frodo(server1:21069)<v1>:60929( version:GFE 9.0.0 )] (locator)
+```
+
+After some members have been upgraded, non-upgraded members will log the
following message when they receive a new membership view:
+
+``` pre
+Membership: received new view [frodo(locator1:20786)<v0>:32240|4]
+ [frodo(locator1:20786)<v0>:32240/51878,
frodo(server1:21069)<v1>:60929/46949,
+ frodo(server2:21617)<v4>( version:UNKNOWN[ordinal=23] ):14973/33919]
+```
+
+ Non-upgraded members identify members that have been upgraded to the next
version with `version: UNKNOWN`.
+
+**Cluster configuration affects save and restore**
+
+The way in which your cluster configuration was created determines which
commands you use to save
+ and restore that cluster configuration during the upgrade procedure.
+
+ - If your system was configured with `gfsh` commands, relying on the
underlying **cluster configuration service**, the configuration can be saved in
one central location, then applied to all newly-upgraded members. See
[Exporting and Importing Cluster
Configurations](../../configuring/cluster_config/export-import.html).
+ - If your system was configured with **XML properties** specified through
the Java API or configuration files, you must save the configuration for each
member before you bring it down, then re-import it for that member's upgraded
counterpart. See [Deploying Configuration Files without the Cluster
Configuration Service](../../configuring/running/deploying_config_files.html).
+
+## <a id="rolling-upgrade-procedure" class="no-quick-link"></a>Rolling Upgrade
Procedure
+
+Begin by installing the new version of the software alongside the older
version of the software on all hosts. You will need both versions of the
software during the upgrade procedure.
+
+Upgrade locators first, then data members, then clients.
+
+### <a id="upgrade-locators" class="no-quick-link"></a>Upgrade Locators
+
+1. On the machine hosting the first locator you wish to upgrade, open a
terminal console.
+
+2. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to the currently running locator.
+ For example:
+
+ ``` pre
+ gfsh>connect --locator=locator_hostname_or_ip_address[port]
+ ```
+
+3. Use `gfsh` commands to characterize your current installation so you can
compare your post-upgrade system to the current one.
+For example, use the `list members` command to view locators and data members:
+
+ ```
+ Name | Id
+ -------- | ------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26510:locator)<ec><v0>:1024
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+4. Save your cluster configuration.
+ - If you are using the cluster configuration service, use the gfsh `export
cluster-configuration` command. You only need to do this once, as the
newly-upgraded locator will propagate the configuration to newly-upgraded
members as they come online.
+ - For an XML configuration, save `cache.xml`, `gemfire.properties`, and any
other relevant configuration files to a well-known location. You must repeat
this step for each member you upgrade.
+
+5. Stop the locator. For example:
+
+ ``` pre
+ gfsh>stop locator --name=locator1
+ Stopping Locator running in /Users/username/sandbox/locator on
172.16.71.1[10334] as locator...
+ Process ID: 96686
+ Log File: /Users/username/sandbox/locator/locator.log
+ ....
+ No longer connected to 172.16.71.1[1099].
+ ```
+6. Start `gfsh` from the new GemFire installation.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+---
+title: Rolling Upgrade
+---
+
+A rolling upgrade eliminates system downtime by keeping your existing
distributed system running while you upgrade one member at a time.
+Each upgraded member can communicate with other members that are still running
the earlier version of GemFire, so servers can respond to
+client requests even as the upgrade is underway. Interdependent data members
can be stopped and started without mutually blocking, a problem
+that can occur when multiple data members are stopped at the same time.
+
+## <a id="rolling-upgrade-limitations-requirements"
class="no-quick-link"></a>Rolling Upgrade Limitations and Requirements
+
+**Versions**
+
+Rolling upgrade requires that the older and newer versions of
<%=vars.product_name%> are mutually compatible, which usually means that they
+share the same major version number.
+
+See [Version Compatibilities](upgrade_planning.html#version_compatibilities)
+for more details on how different versions of <%=vars.product_name%> can
interoperate.
+
+**Components**
+
+Rolling upgrades apply to the peer members or cache servers within a
distributed system.
+Under some circumstances, rolling upgrades can also be applied within
individual sites of multi-site (WAN) deployments.
+
+**Redundancy**
+
+All partitioned regions in your system must have full redundancy.
+Check the redundancy state of all your regions *before* you begin the rolling
upgrade and *before* stopping any members.
+See [Checking Redundancy in Partitioned
Regions](../../developing/partitioned_regions/checking_region_redundancy.html)
for details.
+
+If a rolling update is not possible for your system, follow the [Off-Line
Upgrade](upgrade_offline.html) procedure.
+
+## <a id="rolling-upgrade-guidelines" class="no-quick-link"></a>Rolling
Upgrade Guidelines
+
+**Do not create or destroy regions**
+
+When you perform a rolling upgrade, your online cluster will have a mix of
members running different versions of GemFire.
+During this time period, do not execute region operations such as region
creation or region destruction.
+
+**Region rebalancing affects the restart process**
+
+If you have `startup-recovery-delay` disabled (set to -1) for your partitioned
region, you will need to perform a rebalance on your
+region after you restart each member.
+If rebalance occurs automatically, as it will if `startup-recovery-delay` is
enabled (set to a value other than -1), make sure that the rebalance completes
before you stop the next server.
+If you have `startup-recovery-delay` enabled and set to a high number, you may
need to wait extra time until the region has recovered redundancy, because
rebalance must complete before new servers are restarted.
+The partitioned region attribute `startup-recovery-delay` is described in
[Configure Member Join Redundancy Recovery for a Partitioned
Region](../../developing/partitioned_regions/set_join_redundancy_recovery.html).
+
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
+
+``` pre
+[info 2013/06/03 10:03:29.206 PDT frodo <vm_1_thr_1_frodo> tid=0x1a]
DistributionManager frodo(locator1:21869:locator)<v16>:28242 started on
frodo[15001]. There
+ were 2 other DMs. others: [frodo(server2:21617)<v4>:14973( version:GFE
9.0.0 ), frodo(server1:21069)<v1>:60929( version:GFE 9.0.0 )] (locator)
+```
+
+After some members have been upgraded, non-upgraded members will log the
following message when they receive a new membership view:
+
+``` pre
+Membership: received new view [frodo(locator1:20786)<v0>:32240|4]
+ [frodo(locator1:20786)<v0>:32240/51878,
frodo(server1:21069)<v1>:60929/46949,
+ frodo(server2:21617)<v4>( version:UNKNOWN[ordinal=23] ):14973/33919]
+```
+
+ Non-upgraded members identify members that have been upgraded to the next
version with `version: UNKNOWN`.
+
+**Cluster configuration affects save and restore**
+
+The way in which your cluster configuration was created determines which
commands you use to save
+ and restore that cluster configuration during the upgrade procedure.
+
+ - If your system was configured with `gfsh` commands, relying on the
underlying **cluster configuration service**, the configuration can be saved in
one central location, then applied to all newly-upgraded members. See
[Exporting and Importing Cluster
Configurations](../../configuring/cluster_config/export-import.html).
+ - If your system was configured with **XML properties** specified through
the Java API or configuration files, you must save the configuration for each
member before you bring it down, then re-import it for that member's upgraded
counterpart. See [Deploying Configuration Files without the Cluster
Configuration Service](../../configuring/running/deploying_config_files.html).
+
+## <a id="rolling-upgrade-procedure" class="no-quick-link"></a>Rolling Upgrade
Procedure
+
+Begin by installing the new version of the software alongside the older
version of the software on all hosts. You will need both versions of the
software during the upgrade procedure.
+
+Upgrade locators first, then data members, then clients.
+
+### <a id="upgrade-locators" class="no-quick-link"></a>Upgrade Locators
+
+1. On the machine hosting the first locator you wish to upgrade, open a
terminal console.
+
+2. Start a `gfsh` prompt, using the version from your current GemFire
installation, and connect to the currently running locator.
Review comment:
Remove "GemFire"
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+**Checking component versions while upgrading**
+
+During a rolling upgrade, you can check the current GemFire version of all
members in the cluster by looking at the server or locator logs.
+
+ When an upgraded member reconnects to the distributed system, it logs all
the members it can see as well as the GemFire version of those members. For
example, an upgraded locator will now detect GemFire members running the older
version of GemFire (in this case, the version being upgraded-- GFE 9.0.0) :
+
+``` pre
+[info 2013/06/03 10:03:29.206 PDT frodo <vm_1_thr_1_frodo> tid=0x1a]
DistributionManager frodo(locator1:21869:locator)<v16>:28242 started on
frodo[15001]. There
+ were 2 other DMs. others: [frodo(server2:21617)<v4>:14973( version:GFE
9.0.0 ), frodo(server1:21069)<v1>:60929( version:GFE 9.0.0 )] (locator)
Review comment:
Change the two appearances of `GFE 9.0.0` as in the previous comment.
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+6. Start `gfsh` from the new GemFire installation.
+ Verify that you are running the newer version with
+
+ ``` pre
+ gfsh>version
+ ```
+
+7. Start a locator and import the saved configuration. If you are using the
cluster configuration service, use the same name and directory as the older
version you stopped, and the new locator will access the old locator's cluster
configuration without having to import it in a separate step:
+
+ ```
+ gfsh>start locator --name=locator1 --enable-cluster-configuration=true
--dir=/data/locator1
+ ```
+
+ Otherwise, use the gfsh `import cluster-configuration` command or
explicitly import `.xml` and `.properties` files, as appropriate.
+
+8. The new locator should reconnect to the same members as the older locator.
Use `list members` to verify:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 |
172.16.71.1(locator1:26752:locator)<ec><v17>:1024(version:UNKNOWN[ordinal=65])
+ locator2 | 172.16.71.1(locator2:26511:locator)<ec><v1>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027
+ ```
+
+9. Upgrade the remaining locators by stopping and restarting them. When you
have completed that step, the system gives a more coherent view of version
numbers:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026(version:GFE 9.0)
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
+ ```
+
+ The server entries show that the servers are running an older version of
gemfire, in this case `(version:GFE 9.0)`.
+
+### <a id="upgrade-servers" class="no-quick-link"></a>Upgrade Servers
+
+After you have upgraded all of the system's locators, upgrade the servers.
+
+1. Upgrade each server, one at a time, by stopping it and restarting it.
Restart the server with the same command-line options with which it was
originally started in the previous installation. For example:
+
+ ```
+ gfsh>stop server --name=server1
+ Stopping Cache Server running in /Users/share/server1 on
172.16.71.1[52139] as server1...
+
+ gfsh>start server --name=server1 --use-cluster-configuration=true
--server-port=0 --dir=/data/server1
+ Starting a Geode Server in /Users/share/server1...
+ ```
+
+ Use the `list members` command to verify that the server is now running
the new version of GemFire:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26835)<v32>:1026
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
Review comment:
Change `GFE 9.0`
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+9. Upgrade the remaining locators by stopping and restarting them. When you
have completed that step, the system gives a more coherent view of version
numbers:
+
+ ```
+ gfsh>list members
+ Name | Id
+ -------- | ----------------------------------------------------
+ locator1 | 172.16.71.1(locator1:26752:locator)<ec><v17>:1024
+ locator2 | 172.16.71.1(locator2:26808:locator)<ec><v30>:1025
+ server1 | 172.16.71.1(server1:26514)<v2>:1026(version:GFE 9.0)
+ server2 | 172.16.71.1(server2:26518)<v3>:1027(version:GFE 9.0)
Review comment:
Change `GFE 9.0`
##########
File path: geode-docs/getting_started/upgrade/upgrade_rolling.html.md.erb
##########
@@ -0,0 +1,201 @@
+ The server entries show that the servers are running an older version of
gemfire, in this case `(version:GFE 9.0)`.
Review comment:
Remove "gemfire" and change `GFE 9.0`
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]