This is an automated email from the ASF dual-hosted git repository.

epugh pushed a commit to branch branch_9x
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/branch_9x by this push:
     new 81d766272d1 SOLR-17468: Put SolrCloud docs first in Ref Guide (#2730)
81d766272d1 is described below

commit 81d766272d1269146697b6f1d170fc7453650408
Author: Eric Pugh <[email protected]>
AuthorDate: Fri Oct 18 21:07:04 2024 -0400

    SOLR-17468: Put SolrCloud docs first in Ref Guide (#2730)
    
    For each place where we refer to Cloud and non cloud, move Cloud first.  
Likewise where we refer to something that impacts collections and cores, make 
sure Collection is first.  This aligns with us moving to running in Cloud mode 
by default.
    
    (cherry picked from commit b6e5f9f08339b1407217a2e8d8ec106cd0d0f587)
---
 dev-docs/apis.adoc                                 |  1 -
 .../configuration-guide/pages/config-sets.adoc     | 74 +++++++++++-----------
 .../modules/deployment-guide/deployment-nav.adoc   |  6 +-
 .../deployment-guide/pages/cluster-types.adoc      |  4 +-
 .../deployment-guide/pages/enabling-ssl.adoc       | 38 +++++------
 .../deployment-guide/pages/installing-solr.adoc    | 18 +++---
 .../monitoring-with-prometheus-and-grafana.adoc    |  2 +-
 .../deployment-guide/pages/rate-limiters.adoc      |  2 +-
 .../pages/solr-control-script-reference.adoc       | 20 +++---
 .../deployment-guide/pages/solr-in-docker.adoc     | 70 ++++++++++----------
 .../deployment-guide/pages/solr-on-hdfs.adoc       | 37 ++++++-----
 .../modules/deployment-guide/pages/solrj.adoc      |  8 +++
 .../pages/user-managed-index-replication.adoc      |  2 +-
 .../getting-started/pages/introduction.adoc        |  2 +-
 .../modules/indexing-guide/pages/schema-api.adoc   |  2 +-
 15 files changed, 147 insertions(+), 139 deletions(-)

diff --git a/dev-docs/apis.adoc b/dev-docs/apis.adoc
index 49ff7df5a11..527f39141fe 100644
--- a/dev-docs/apis.adoc
+++ b/dev-docs/apis.adoc
@@ -80,4 +80,3 @@ A good example for each of these steps can be seen in Solr's 
v2 "add-replica-pro
 While we've settled on JAX-RS as our framework for defining v2 APIs going 
forward, Solr still retains many v2 APIs that were written using an older 
homegrown framework.
 This framework defines APIs using annotations (e.g. `@EndPoint`) similar to 
those used by JAX-RS, but lacks the full range of features and 3rd-party 
tooling.
 We're in the process of migrating these API definitions to JAX-RS and hope to 
remove all support for this legacy framework in a future release.
-
diff --git 
a/solr/solr-ref-guide/modules/configuration-guide/pages/config-sets.adoc 
b/solr/solr-ref-guide/modules/configuration-guide/pages/config-sets.adoc
index 445f9cdb97d..57707e94c6e 100644
--- a/solr/solr-ref-guide/modules/configuration-guide/pages/config-sets.adoc
+++ b/solr/solr-ref-guide/modules/configuration-guide/pages/config-sets.adoc
@@ -24,9 +24,45 @@ Such configuration, _configsets_, can be named and then 
referenced by collection
 Solr ships with two example configsets located in `server/solr/configsets`, 
which can be used as a base for your own.
 These example configsets are named `_default` and 
`sample_techproducts_configs`.
 
+== Configsets in SolrCloud Clusters
+
+In SolrCloud, it's critical to understand that configsets are stored in 
ZooKeeper _and not_ in the file system.
+Solr's `_default` configset is uploaded to ZooKeeper on initialization.
+This and a couple of example configsets remain on the file system but Solr 
does not use them unless they are used with a new collection.
+
+When you create a collection in SolrCloud, you can specify a named configset.
+If you don't, then the `_default` will be copied and given a unique name for 
use by the new collection.
+
+A configset can be uploaded to ZooKeeper either via the 
xref:configsets-api.adoc[] or more directly via 
xref:deployment-guide:solr-control-script-reference.adoc#upload-a-configuration-set[`bin/solr
 zk upconfig`].
+The Configsets API has some other operations as well, and likewise, so does 
the CLI.
+
+To upload a file to a configset already stored on ZooKeeper, you can use 
xref:deployment-guide:solr-control-script-reference.adoc#copy-between-local-files-and-zookeeper-znodes[`bin/solr
 zk cp`].
+
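As a sketch of the two CLI paths just described (the configset name, local path, and ZooKeeper address below are illustrative placeholders, not defaults):

```shell
# Upload a local configset directory to ZooKeeper under the name "myconfig"
# (path and ZK address are placeholders for your environment).
bin/solr zk upconfig -n myconfig -d /path/to/myconfig -z localhost:2181

# Copy a single extra file into that already-uploaded configset.
bin/solr zk cp file:/path/to/stopwords.txt zk:/configs/myconfig/stopwords.txt -z localhost:2181
```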
+CAUTION: By default, ZooKeeper's file size limit is 1MB.
+If your files are larger than this, you'll need to either 
xref:deployment-guide:zookeeper-ensemble.adoc#increasing-the-file-size-limit[increase
 the ZooKeeper file size limit] or store them 
xref:libs.adoc#lib-directives-in-solrconfig[on the filesystem] of every node in 
a cluster.
+
+=== Forbidden File Types
+
+Solr does not accept all file types when uploading or downloading configSets.
+By default the excluded file types are:
+
+- `class`
+- `java`
+- `jar`
+- `tgz`
+- `zip`
+- `tar`
+- `gz`
+
+However, users can impose stricter or looser limits on their systems by 
providing a comma separated list of file types
+(without the preceding dot, e.g. `jar,class,csv`), to either of the following 
settings:
+
+- System Property: `-DsolrConfigSetForbiddenFileTypes`
+- Environment Variable: `SOLR_CONFIG_SET_FORBIDDEN_FILE_TYPES`
+
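For instance, the environment-variable route might look like this (the extension list shown is a hypothetical policy, not Solr's shipped default):

```shell
# Restrict configset uploads by forbidding these extensions (note: no
# leading dots). The list here is an example policy, not the default.
export SOLR_CONFIG_SET_FORBIDDEN_FILE_TYPES="jar,class,exe,sh"
bin/solr start --cloud
```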
 == Configsets in User-Managed Clusters or Single-Node Installations
 
-If you are using Solr in a user-managed cluster or a single-node installation, 
configsets are managed on the filesystem.
+If you are using Solr in a user-managed cluster or a single-node installation, configsets are managed on the filesystem.
 
 Each Solr core can have its very own configset located beneath it in a `<instance_dir>/conf/` dir.
 Here, it is not named or shared and the word _configset_ isn't found.
@@ -81,39 +117,3 @@ curl -v -X POST -H 'Content-type: application/json' -d '{
 ----
 ====
 ======
-
-== Configsets in SolrCloud Clusters
-
-In SolrCloud, it's critical to understand that configsets are stored in 
ZooKeeper _and not_ the file system.
-Solr's `_default` configset is uploaded to ZooKeeper on initialization.
-This and a couple of example configsets remain on the file system but Solr 
does not use them unless they are used with a new collection.
-
-When you create a collection in SolrCloud, you can specify a named configset.
-If you don't, then the `_default` will be copied and given a unique name for 
use by the new collection.
-
-A configset can be uploaded to ZooKeeper either via the 
xref:configsets-api.adoc[] or more directly via 
xref:deployment-guide:solr-control-script-reference.adoc#upload-a-configuration-set[`bin/solr
 zk upconfig`].
-The Configsets API has some other operations as well, and likewise, so does 
the CLI.
-
-To upload a file to a configset already stored on ZooKeeper, you can use 
xref:deployment-guide:solr-control-script-reference.adoc#copy-between-local-files-and-zookeeper-znodes[`bin/solr
 zk cp`].
-
-CAUTION: By default, ZooKeeper's file size limit is 1MB.
-If your files are larger than this, you'll need to either 
xref:deployment-guide:zookeeper-ensemble.adoc#increasing-the-file-size-limit[increase
 the ZooKeeper file size limit] or store them 
xref:libs.adoc#lib-directives-in-solrconfig[on the filesystem] of every node in 
a cluster.
-
-=== Forbidden File Types
-
-Solr does not accept all file types when uploading or downloading configSets.
-By default the excluded file types are:
-
-- `class`
-- `java`
-- `jar`
-- `tgz`
-- `zip`
-- `tar`
-- `gz`
-
-However, users can impose stricter or looser limits on their systems by 
providing a comma separated list of file types
-(without the preceding dot, e.g. `jar,class,csv`), to either of the following 
settings:
-
-- System Property: `-DsolrConfigSetForbiddenFileTypes`
-- Environment Variable: `SOLR_CONFIG_SET_FORBIDDEN_FILE_TYPES`
diff --git a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
index cdf02d39dde..a41206030a7 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
@@ -32,9 +32,6 @@
 
 * Scaling Solr
 ** xref:cluster-types.adoc[]
-** User-Managed Clusters
-*** xref:user-managed-index-replication.adoc[]
-*** xref:user-managed-distributed-search.adoc[]
 ** SolrCloud Clusters
 *** xref:solrcloud-shards-indexing.adoc[]
 *** xref:solrcloud-recoveries-and-write-tolerance.adoc[]
@@ -55,6 +52,9 @@
 *** Admin UI
 **** xref:collections-core-admin.adoc[]
 **** xref:cloud-screens.adoc[]
+** User-Managed Clusters
+*** xref:user-managed-index-replication.adoc[]
+*** xref:user-managed-distributed-search.adoc[]
 
 * Monitoring Solr
 ** xref:configuring-logging.adoc[]
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-types.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-types.adoc
index 913c90d73c8..d8583085adb 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-types.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-types.adoc
@@ -21,6 +21,8 @@ A Solr cluster is a group of servers (_nodes_) that each run 
Solr.
 There are two general modes of operating a cluster of Solr nodes.
 One mode provides central coordination of the Solr nodes (<<SolrCloud Mode>>), 
while the other allows you to operate a cluster without this central 
coordination (<<User-Managed Mode>>).
 
+TIP: "User Managed" and "Single Node" are sometimes referred to as 
"Standalone", especially in source code.
+
 Both modes share general concepts, but ultimately differ in how those concepts 
are reflected in functionality and features.
 
 First let's cover a few general concepts and then outline the differences 
between the two modes.
@@ -88,7 +90,7 @@ As long as one replica of each relevant shard is available, a 
user query or inde
 
 == User-Managed Mode
 
-Solr's user-managed mode requires that cluster coordination activities that 
SolrCloud normally uses ZooKeeper for are performed manually or with local 
scripts.
+Solr's user-managed mode requires that the cluster coordination activities SolrCloud normally delegates to ZooKeeper be performed manually or with local scripts.
 
 If the corpus of documents is too large for a single-sharded index, the logic 
to create shards is entirely left to the user.
 There are no automated or programmatic ways for Solr to create shards during 
indexing.
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
index cbfc9d3f85c..55aa2bde610 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
@@ -132,9 +132,11 @@ They are mutually exclusive and Jetty will select one of 
them which may not be w
 
 When you start Solr, the `bin/solr` script includes these settings and will 
pass them as system properties to the JVM.
 
+If you are using SolrCloud, you need to <<Configure ZooKeeper>> before 
starting Solr.
+
 If you are running Solr in a user-managed cluster or single-node installation, 
you can skip to <<Start User-Managed Cluster or Single-Node Solr>>.
 
-If you are using SolrCloud, however, you need to <<Configure ZooKeeper>> 
before starting Solr.
+
 
 === Password Distribution via Hadoop Credential Store
 
@@ -268,19 +270,23 @@ Once this and all other steps are complete, you can go 
ahead and start Solr.
 
 == Starting Solr After Enabling SSL
 
-=== Start User-Managed Cluster or Single-Node Solr
+=== Start SolrCloud
 
-Start Solr using the Solr control script as shown in the examples below.
-Customize the values for the parameters shown as needed and add any used in 
your system.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see 
xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include 
Files]) you can omit `-z <zk host string>` from all of the 
`bin/solr`/`bin\solr.cmd` commands below.
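The include-file approach might be sketched as follows (hostnames are illustrative):

```shell
# Fragment for solr.in.sh: with ZK_HOST defined, bin/solr commands no
# longer need an explicit -z argument. Hostnames below are placeholders.
ZK_HOST="server1:2181,server2:2181,server3:2181"
```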
 
-[tabs#single]
+Start each Solr node with the Solr control script as shown in the examples 
below.
+Customize the values for the parameters shown as necessary and add any used in 
your system.
+
+If you created the SSL key without all DNS names or IP addresses on which Solr 
nodes run, you can tell Solr to skip hostname verification for inter-node 
communications by setting the `-Dsolr.ssl.checkPeerName=false` system property.
+
+[tabs#cloud]
 ======
 *nix::
 +
 ====
 [source,terminal]
 ----
-$ bin/solr start -p 8984
+$ bin/solr start --cloud --solr-home cloud/node1 -z 
server1:2181,server2:2181,server3:2181 -p 8984
 ----
 ====
 
@@ -289,28 +295,25 @@ Windows::
 ====
 [source,powershell]
 ----
-C:\> bin\solr.cmd -p 8984
+C:\> bin\solr.cmd --cloud --solr-home cloud\node1 -z 
server1:2181,server2:2181,server3:2181
+
 ----
 ====
 ======
 
-=== Start SolrCloud
-
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see 
xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include 
Files]) you can omit `-z <zk host string>` from all of the 
`bin/solr`/`bin\solr.cmd` commands below.
-
-Start each Solr node with the Solr control script as shown in the examples 
below.
-Customize the values for the parameters shown as necessary and add any used in 
your system.
+=== Start User-Managed Cluster or Single-Node Solr
 
-If you created the SSL key without all DNS names or IP addresses on which Solr 
nodes run, you can tell Solr to skip hostname verification for inter-node 
communications by setting the `-Dsolr.ssl.checkPeerName=false` system property.
+Start Solr using the Solr control script as shown in the examples below.
+Customize the values for the parameters shown as needed and add any used in 
your system.
 
-[tabs#cloud]
+[tabs#single]
 ======
 *nix::
 +
 ====
 [source,terminal]
 ----
-$ bin/solr start --cloud --solr-home cloud/node1 -z 
server1:2181,server2:2181,server3:2181 -p 8984
+$ bin/solr start -p 8984
 ----
 ====
 
@@ -319,8 +322,7 @@ Windows::
 ====
 [source,powershell]
 ----
-C:\> bin\solr.cmd --cloud --solr-home cloud\node1 -z 
server1:2181,server2:2181,server3:2181
-
+C:\> bin\solr.cmd -p 8984
 ----
 ====
 ======
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
index cb1cbc50d72..1ecd012da69 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
@@ -169,14 +169,14 @@ To use it to start Solr you can simply enter:
 
 [source,bash]
 ----
-bin/solr start
+bin/solr start --cloud
 ----
 
 If you are running Windows, you can start Solr by running `bin\solr.cmd` 
instead.
 
 [source,plain]
 ----
-bin\solr.cmd start
+bin\solr.cmd start --cloud
 ----
 
 This will start Solr in the background, listening on port 8983.
@@ -193,14 +193,14 @@ For instance, to launch the "techproducts" example, you 
would do:
 
 [source,bash]
 ----
-bin/solr start -e techproducts
+bin/solr start --cloud -e techproducts
 ----
 
 Currently, the available examples you can run are: techproducts, schemaless, 
and cloud.
 See the section 
xref:solr-control-script-reference.adoc#running-with-example-configurations[Running
 with Example Configurations] for details on each example.
 
-.Getting Started with SolrCloud
-NOTE: Running the `cloud` example starts Solr in 
xref:cluster-types.adoc#solrcloud-mode[SolrCloud] mode.
+.Going deeper with SolrCloud
+NOTE: Running the `cloud` example demonstrates running multiple nodes of Solr 
using xref:cluster-types.adoc#solrcloud-mode[SolrCloud] mode.
 For more information on starting Solr in SolrCloud mode, see the section 
xref:getting-started:tutorial-solrcloud.adoc[].
 
 === Check if Solr is Running
@@ -225,9 +225,9 @@ image::installing-solr/SolrAdminDashboard.png[Solr's Admin 
UI,pdfwidth=75%]
 If Solr is not running, your browser will complain that it cannot connect to 
the server.
 Check your port number and try again.
 
-=== Create a Core
+=== Create a Collection
 
-If you did not start Solr with an example configuration, you would need to 
create a core in order to be able to index and search.
+If you did not start Solr with an example configuration, you would need to 
create a collection in order to be able to index and search.
 You can do so by running:
 
 [source,bash]
@@ -235,9 +235,9 @@ You can do so by running:
 bin/solr create -c <name>
 ----
 
-This will create a core that uses a data-driven schema which tries to guess 
the correct field type when you add documents to the index.
+This will create a collection that uses a data-driven schema which tries to 
guess the correct field type when you add documents to the index.
 
-To see all available options for creating a new core, execute:
+To see all available options for creating a new collection, execute:
 
 [source,bash]
 ----
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
 
b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
index ebf405999be..7ca6497ea03 100644
--- 
a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
+++ 
b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
@@ -124,7 +124,7 @@ It can be any port not already in use on your server.
 |Optional |Default: _see description_
 |===
 +
-The Solr base URL (such as `\http://localhost:8983/solr`) when Solr is running 
in a user-managed cluster or a single-node installation.
+The Solr base URL (such as `\http://localhost:8983/solr`) when Solr is running in a user-managed cluster or a single-node installation.
 If you are running SolrCloud, do not specify this parameter.
 If neither the `-b` parameter nor the `-z` parameter are defined, the default 
is `-b \http://localhost:8983/solr`.
 
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/rate-limiters.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/rate-limiters.adoc
index e36715830af..66268a1f998 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/rate-limiters.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/rate-limiters.adoc
@@ -22,7 +22,7 @@ The default rate limiting is implemented for updates and 
searches.
 
 If a request exceeds the request quota, further incoming requests are rejected 
with HTTP error code 429 (Too Many Requests).
 
-Note that rate limiting works at an instance (JVM) level, not at a core or 
collection level.
+Note that rate limiting works at an instance (JVM) level, not at a collection 
or core level.
 Consider that when planning capacity.
 There is future work planned to have finer grained execution here 
(https://issues.apache.org/jira/browse/SOLR-14710[SOLR-14710]).
 
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
 
b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
index c5e4b841b77..2cfa4438d2d 100644
--- 
a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
+++ 
b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
@@ -623,16 +623,16 @@ Below is an example healthcheck request and response 
using a non-standard ZooKee
 
 The `bin/solr` script can also help you create new collections or cores, or 
delete collections or cores.
 
-=== Create a Core or Collection
+=== Create a Collection or Core
 
-The `create` command creates a core or collection depending on whether Solr is 
running in standalone (core) or SolrCloud mode (collection).
+The `create` command creates a core or collection depending on whether Solr is 
running in SolrCloud (collection) or user-managed mode (core).
 In other words, this action detects which mode Solr is running in, and then 
takes the appropriate action (either `create_core` or `create_collection`).
 
 `bin/solr create [options]`
 
 `bin/solr create --help`
 
-==== Create Core or Collection Parameters
+==== Create Collection or Core Parameters
 
 `-c <name>`::
 +
@@ -641,7 +641,7 @@ In other words, this action detects which mode Solr is 
running in, and then take
 s|Required |Default: none
 |===
 +
-Name of the core or collection to create.
+Name of the collection or core to create.
 +
 *Example*: `bin/solr create -c mycollection`
 
@@ -666,7 +666,7 @@ See the section <<Configuration Directories and SolrCloud>> 
below for more detai
 |===
 +
 The configuration name.
-This defaults to the same name as the core or collection.
+This defaults to the same name as the collection or core.
 +
 *Example*: `bin/solr create -n basic`
 
@@ -945,9 +945,9 @@ $ bin/solr config -c mycollection --action 
set-user-property --property update.a
 
 See also the section <<Set or Unset Configuration Properties>>.
 
-=== Delete Core or Collection
+=== Delete Collection or Core
 
-The `delete` command detects the mode that Solr is running in and then deletes 
the specified core (user-managed or single-node) or collection (SolrCloud) as 
appropriate.
+The `delete` command detects the mode that Solr is running in and then deletes 
the specified collection (SolrCloud) or core (user-managed or single-node) as 
appropriate.
 
 `bin/solr delete [options]`
 
@@ -958,7 +958,7 @@ If you're deleting a collection in SolrCloud mode, the 
default behavior is to al
 For example, if you created a collection with `bin/solr create -c contacts`, 
then the delete command `bin/solr delete -c contacts` will check to see if the 
`/configs/contacts` configuration directory is being used by any other 
collections.
 If not, then the `/configs/contacts` directory is removed from ZooKeeper.  You can override this behavior by passing `--delete-config false` when running this command.
 
-==== Delete Core or Collection Parameters
+==== Delete Collection or Core Parameters
 
 `-c <name>`::
 +
@@ -967,7 +967,7 @@ If not, then the `/configs/contacts` directory is removed 
from ZooKeeper.  You c
 s|Required |Default: none
 |===
 +
-Name of the core or collection to delete.
+Name of the collection or core to delete.
 +
 *Example*: `bin/solr delete -c mycoll`
 
@@ -1207,7 +1207,7 @@ To unset a previously set user-defined property, specify 
`--action unset-user-pr
 s|Required |Default: none
 |===
 +
-Name of the core or collection on which to change configuration.
+Name of the collection or core on which to change configuration.
 
 `--action <name>`::
 +
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
index a194513011a..16031081327 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
@@ -61,7 +61,7 @@ This gives you a default search for `*:*` which returns all 
docs.
 Hit the "Execute Query" button, and you should see a few docs with data.
 Congratulations!
 
-=== Docker-Compose
+=== Docker Compose
 
 You can use Docker Compose to run a single standalone server or a multi-node cluster.
 You can also use Docker Volumes instead of host-mounted directories.
@@ -69,7 +69,6 @@ For example, with a `docker-compose.yml` containing the 
following:
 
 [source,yaml]
 ----
-version: '3'
 services:
   solr:
     image: solr
@@ -97,7 +96,6 @@ name `zoo`.:
 
 [source,yaml]
 ----
-version: '3'
 services:
   solr:
     image: solr:9-slim
@@ -120,7 +118,7 @@ networks:
 ----
 
 
-=== Single-Command Demo
+=== Single Command Demo
 
 For quick demos of Solr docker, there is a single command that starts Solr, 
creates a collection called "demo", and loads sample data into it:
 
@@ -140,6 +138,38 @@ See below for examples.
 
 The Solr docker distribution adds scripts in `/opt/solr/docker/scripts` to 
make it easier to use under Docker, for example to create cores on container 
startup.
 
+=== Creating Collections
+
+In a "SolrCloud" cluster you create "collections" to store data; and again you have several options for creating a collection.
+
+These examples assume you're running a xref:docker-compose[docker compose 
cluster].
+
+The first way to create a collection is to go to the 
http://localhost:8983/[Solr Admin UI], select "Collections" from the left-hand 
side navigation menu, then press the "Add Collection" button, give it a name, 
select the `_default` config set, then press the "Add Collection" button.
+
+The second way is through the Solr control script on one of the containers:
+
+[source,bash]
+----
+docker exec solr1 solr create -c gettingstarted2
+----
+
+The third way is to use a separate container:
+
+[source,bash]
+----
+docker run -e SOLR_HOST=solr1 --network docs_solr solr solr create -c 
gettingstarted3 -p 8983
+----
+
+The fourth way is to use the remote API, from the host or from one of the 
containers, or some new container on the same network (adjust the hostname 
accordingly):
+
+[source,bash]
+----
+curl 
'http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted3&numShards=1&collection.configName=_default'
+----
+
+If you want to use a custom configuration for your collection, you first need 
to upload it, and then refer to it by name when you create the collection.
+You can use the 
xref:solr-control-script-reference.adoc#upload-a-configuration-set[`bin/solr 
zk` command] or the 
xref:configuration-guide:configsets-api.adoc#configsets-upload[Configsets API].
+
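Combining those last two points, a custom-configset flow might look like this (the container names, configset path, and collection name are illustrative):

```shell
# Upload a configset from inside a container, then create a collection
# that references it by name. "solr1", "zoo" and the paths are placeholders.
docker exec solr1 solr zk upconfig -n myconfig -d /opt/solr/server/solr/configsets/_default -z zoo:2181
docker exec solr1 solr create -c mycollection -n myconfig
```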
 === Creating Cores
 
 When Solr runs in standalone mode, you create "cores" to store data.
@@ -198,38 +228,6 @@ For example:
 docker run -p 8983:8983 -v $PWD/mysetup.sh:/mysetup.sh --name my_solr solr 
bash -c "precreate-core gettingstarted && source /mysetup.sh && solr-foreground"
 ----
 
-=== Creating Collections
-
-In a "SolrCloud" cluster you create "collections" to store data; and again you 
have several options for creating a core.
-
-These examples assume you're running a xref:docker-compose[docker compose 
cluster].
-
-The first way to create a collection is to go to the 
http://localhost:8983/[Solr Admin UI], select "Collections" from the left-hand 
side navigation menu, then press the "Add Collection" button, give it a name, 
select the `_default` config set, then press the "Add Collection" button.
-
-The second way is through the Solr control script on one of the containers:
-
-[source,bash]
-----
-docker exec solr1 solr create -c gettingstarted2
-----
-
-The third way is to use a separate container:
-
-[source,bash]
-----
-docker run -e SOLR_HOST=solr1 --network docs_solr solr solr create_collection 
-c gettingstarted3 -p 8983
-----
-
-The fourth way is to use the remote API, from the host or from one of the 
containers, or some new container on the same network (adjust the hostname 
accordingly):
-
-[source,bash]
-----
-curl 
'http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted3&numShards=1&collection.configName=_default'
-----
-
-If you want to use a custom configuration for your collection, you first need 
to upload it, and then refer to it by name when you create the collection.
-You can use the 
xref:solr-control-script-reference.adoc#upload-a-configuration-set[`bin/solr 
zk` command] or the 
xref:configuration-guide:configsets-api.adoc#configsets-upload[Configsets API].
-
 === Loading Your Own Data
 
 There are several ways to load data; let's look at the most common ones.
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-on-hdfs.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-on-hdfs.adoc
index 0242439e2af..f2b7f5a4f63 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-on-hdfs.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-on-hdfs.adoc
@@ -36,44 +36,43 @@ This is provided via the `hdfs` 
xref:configuration-guide:solr-modules.adoc[Solr
 
 == Starting Solr on HDFS
 
-=== User-Managed Cluters and Single-Node Installations
+=== SolrCloud Installations
 
-For user-managed clusters or single-node Solr installations, there are a few 
parameters you should modify before starting Solr.
-These can be set in `solrconfig.xml` (more on that <<HdfsDirectoryFactory 
Parameters,below>>), or passed to the `bin/solr` script at startup.
+In SolrCloud mode, it's best to leave the data and update log directories as 
the defaults Solr comes with and simply specify the `solr.hdfs.home`.
+All dynamically created collections will create the appropriate directories 
automatically under the `solr.hdfs.home` root directory.
 
-* You need to use an `HdfsDirectoryFactory` and a data directory in the form 
`hdfs://host:port/path`
-* You need to specify an `updateLog` location in the form 
`hdfs://host:port/path`
+* Set `solr.hdfs.home` in the form `hdfs://host:port/path`
 * You should specify a lock factory type of `'hdfs'` or none.
 
-If you do not modify `solrconfig.xml`, you can instead start Solr on HDFS with 
the following command:
-
 [source,bash]
 ----
-bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
+bin/solr start --cloud -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.lock.type=hdfs
-     -Dsolr.data.dir=hdfs://host:port/path
-     -Dsolr.updatelog=hdfs://host:port/path
+     -Dsolr.hdfs.home=hdfs://host:port/path
 ----
 
-This example will start Solr using the defined JVM properties (explained in 
more detail <<HdfsDirectoryFactory Parameters,below>>).
+This command starts Solr using the defined JVM properties.
 
-=== SolrCloud Instances
+=== User-Managed Clusters and Single-Node Installations
 
-In SolrCloud mode, it's best to leave the data and update log directories as 
the defaults Solr comes with and simply specify the `solr.hdfs.home`.
-All dynamically created collections will create the appropriate directories 
automatically under the `solr.hdfs.home` root directory.
+For user-managed clusters or single-node Solr installations, there are a few parameters you should modify before starting Solr.
+These can be set in `solrconfig.xml` (more on that <<HdfsDirectoryFactory 
Parameters,below>>), or passed to the `bin/solr` script at startup.
 
-* Set `solr.hdfs.home` in the form `hdfs://host:port/path`
+* You need to use an `HdfsDirectoryFactory` and a data directory in the form 
`hdfs://host:port/path`
+* You need to specify an `updateLog` location in the form 
`hdfs://host:port/path`
 * You should specify a lock factory type of `'hdfs'` or none.
 
+If you do not modify `solrconfig.xml`, you can instead start Solr on HDFS with 
the following command:
+
 [source,bash]
 ----
-bin/solr start -c -Dsolr.directoryFactory=HdfsDirectoryFactory
+bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.lock.type=hdfs
-     -Dsolr.hdfs.home=hdfs://host:port/path
+     -Dsolr.data.dir=hdfs://host:port/path
+     -Dsolr.updatelog=hdfs://host:port/path
 ----
 
-This command starts Solr using the defined JVM properties.
-
+This example will start Solr using the defined JVM properties (explained in 
more detail <<HdfsDirectoryFactory Parameters,below>>).
 
 === Modifying solr.in.sh (*nix) or solr.in.cmd (Windows)
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc 
b/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
index b14bd2b3c21..35427fc2772 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
@@ -161,6 +161,14 @@ Additionally, you will need to depend on the 
`solr-solrj-zookeeper` artifact or
 
The ZooKeeper based connection is the most reliable and performant means for 
CloudSolrClient to work.  On the other hand, it means exposing ZooKeeper beyond 
just the Solr nodes, which is a security risk.  It also adds more JAR 
dependencies.
 
+==== Default Collections
+
+Most `SolrClient` methods allow users to specify the collection or core they 
wish to query, etc. as a `String` parameter.
+However, continually specifying this parameter can become tedious, especially 
for users who always work with the same collection.
+
+Users can avoid this pattern by specifying a "default" collection when 
creating their client, using the `withDefaultCollection(String)` method 
available on the relevant `SolrClient` Builder object.
+If specified on a Builder, the created `SolrClient` will use this default for 
making requests whenever a collection or core is needed (and no overriding 
value is specified).
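+
+A minimal SolrJ sketch of the default-collection pattern described above 
(hedged: it assumes a Solr instance at `http://localhost:8983/solr` and a 
hypothetical collection named `techproducts`):
+
+[source,java]
+----
+import org.apache.solr.client.solrj.SolrClient;
+import org.apache.solr.client.solrj.SolrQuery;
+import org.apache.solr.client.solrj.impl.Http2SolrClient;
+
+public class DefaultCollectionExample {
+  public static void main(String[] args) throws Exception {
+    // The default collection is set once, on the Builder...
+    try (SolrClient client =
+        new Http2SolrClient.Builder("http://localhost:8983/solr")
+            .withDefaultCollection("techproducts")
+            .build()) {
+      // ...so no collection argument is needed on individual requests.
+      client.query(new SolrQuery("*:*"));
+    }
+  }
+}
+----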
+
 ==== Timeouts
 All `SolrClient` implementations allow users to specify the connection and 
read timeouts for communicating with Solr.
 These are provided at client creation time, as in the example below:
diff --git 
a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
 
b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
index 0f981c557d7..1a65a8b9aea 100644
--- 
a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
+++ 
b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
@@ -1,4 +1,4 @@
-= User-Managed Index Replication
+= User-Managed Index Replication
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
diff --git 
a/solr/solr-ref-guide/modules/getting-started/pages/introduction.adoc 
b/solr/solr-ref-guide/modules/getting-started/pages/introduction.adoc
index 387cb595c5f..c29091ac2a6 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/introduction.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/introduction.adoc
@@ -41,7 +41,7 @@ Several xref:deployment-guide:client-apis.adoc[] are provided 
for use in common
 
 In addition to providing a network accessible engine for Lucene based document 
retrieval, Solr provides the ability to scale beyond the limitations of a 
single machine.
 Indexes can be sharded and replicated for performance and reliability, using 
either one of two xref:deployment-guide:cluster-types.adoc[].
-One type of cluster requires no supporting infrastructure, and instances are 
managed directly by administrators. The second type uses 
https://zookeeper.apache.org/[Apache Zookeeper^TM^] to coordinate management 
activities across the cluster.
+The most scalable option uses https://zookeeper.apache.org/[Apache 
Zookeeper^TM^] to coordinate management activities across the cluster. The 
older approach requires no supporting infrastructure; instead, instances are 
managed directly by administrators.
 
 Solr scaling and high availability features are so effective that some of the 
largest and most famous internet sites use Solr.
A partial, typically self-nominated, list of sites using Solr can be found at 
https://solr.apache.org/community.html#powered-by.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc 
b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
index 0dd1656ab5f..f5dc074e424 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
@@ -33,7 +33,7 @@ See the section 
xref:configuration-guide:schema-factory.adoc[] for more informat
 The file named "managed-schema.xml" in the example configurations may include 
a note that recommends never hand-editing the file.
 Before the Schema API existed, such edits were the only way to make changes to 
the schema, and users may have a strong desire to continue making changes this 
way.
 
-The reason that this is discouraged is because hand-edits of the schema may be 
lost if the Schema API described here is later used to make a change, unless 
the core or collection is reloaded or Solr is restarted before using the Schema 
API.
+The reason that this is discouraged is because hand-edits of the schema may be 
lost if the Schema API described here is later used to make a change, unless 
the collection or core is reloaded or Solr is restarted before using the Schema 
API.
 If care is taken to always reload or restart after a manual edit, then there 
is no problem at all with doing those edits.
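
The reload step described above can be performed with the Collections API. A 
hedged example (assuming Solr at `localhost:8983` and a hypothetical collection 
named `techproducts`):

[source,bash]
----
# Reload the collection so subsequent Schema API calls see the hand-made edits.
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts"
----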
 
 Prior to Solr 9, this xml file was referred to as `managed-schema` with no 
file extension.

