This is an automated email from the ASF dual-hosted git repository.

vogievetsky pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new f97bcc69d3 Docs: reword single server page (#13659)
f97bcc69d3 is described below

commit f97bcc69d3384ce4a1310a719f965b33565ed41a
Author: Vadim Ogievetsky <[email protected]>
AuthorDate: Wed Jan 11 21:12:52 2023 -0800

    Docs: reword single server page (#13659)
    
    * reword single server page
    
    * fix typo
    
    * Update docs/operations/single-server.md
    
    Co-authored-by: Charles Smith <[email protected]>
    
    * spelling
    
    Co-authored-by: Charles Smith <[email protected]>
---
 docs/operations/single-server.md        | 77 +++++++++------------------------
 docs/tutorials/docker.md                |  2 +-
 docs/tutorials/index.md                 |  7 +--
 docs/tutorials/tutorial-batch-hadoop.md |  2 +-
 docs/tutorials/tutorial-kafka.md        |  2 +-
 5 files changed, 25 insertions(+), 65 deletions(-)

diff --git a/docs/operations/single-server.md b/docs/operations/single-server.md
index 48459a2860..6f9a0ebd3d 100644
--- a/docs/operations/single-server.md
+++ b/docs/operations/single-server.md
@@ -22,18 +22,31 @@ title: "Single server deployment"
   ~ under the License.
   -->
 
+Druid includes a launch script, `bin/start-druid`, that automatically sets various memory-related parameters based on the available processors and memory.
+It accepts optional arguments, such as a list of services, total memory, and a config directory, to override the default JVM arguments and service-specific runtime properties.
+
+By default, the services started by `bin/start-druid`:
+
+- use all available processors
+- can use up to 80% of the system's memory
+- apply the configuration files in `conf/druid/auto` for all other settings
+
+For details about possible arguments, run `bin/start-druid --help`.
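For readers following along with the patched page, the two commands mentioned above can be run from the root of the Druid distribution (a minimal sketch; it assumes the distribution's `bin/` directory is present, and the exact optional flags should be taken from the `--help` output rather than from this example):

```shell
# From the root of the Druid distribution:

# Start all Druid services with auto-sized memory settings
bin/start-druid

# Print the accepted optional arguments
# (service list, total memory, config directory)
bin/start-druid --help
```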
+
+## Single server reference configurations (deprecated)
 
 Druid includes a set of reference configurations and launch scripts for 
single-machine deployments.
+These start scripts are deprecated in favor of the `bin/start-druid` script 
documented above.
 These configuration bundles are located in `conf/druid/single-server/`.
 
-The `auto` configuration sizes runtime parameters based on available 
processors and memory. Other configurations include hard-coded runtime 
parameters for various server sizes. Most users should stick with `auto`. Refer 
below [Druid auto start](#druid-auto-start)
-- `auto` (run script: `bin/start-druid`)
-- `nano-quickstart` (run script: `bin/start-nano-quickstart`)
-- `micro-quickstart` (run script: `bin/start-micro-quickstart`)
-- `small` (run script: `bin/start-single-server-small`)
-- `medium` (run script: `bin/start-single-server-medium`)
-- `large` (run script: `bin/start-single-server-large`)
-- `xlarge` (run script: `bin/start-single-server-xlarge`)
+| Configuration      | Sizing                            | Launch command               | Configuration directory                     |
+|--------------------|-----------------------------------|------------------------------|---------------------------------------------|
+| `nano-quickstart`  | 1 CPU, 4GiB RAM                   | `bin/start-nano-quickstart`  | `conf/druid/single-server/nano-quickstart`  |
+| `micro-quickstart` | 4 CPU, 16GiB RAM                  | `bin/start-micro-quickstart` | `conf/druid/single-server/micro-quickstart` |
+| `small`            | 8 CPU, 64GiB RAM (~i3.2xlarge)    | `bin/start-small`            | `conf/druid/single-server/small`            |
+| `medium`           | 16 CPU, 128GiB RAM (~i3.4xlarge)  | `bin/start-medium`           | `conf/druid/single-server/medium`           |
+| `large`            | 32 CPU, 256GiB RAM (~i3.8xlarge)  | `bin/start-large`            | `conf/druid/single-server/large`            |
+| `xlarge`           | 64 CPU, 512GiB RAM (~i3.16xlarge) | `bin/start-xlarge`           | `conf/druid/single-server/xlarge`           |
 
 The `micro-quickstart` is sized for small machines like laptops and is intended for quick evaluation use cases.
 
@@ -42,51 +55,3 @@ The `nano-quickstart` is an even smaller configuration, 
targeting a machine with
 The other configurations are intended for general use single-machine 
deployments. They are sized for hardware roughly based on Amazon's i3 series of 
EC2 instances.
 
 The startup scripts for these example configurations run a single ZK instance 
along with the Druid services. You can choose to deploy ZK separately as well.
-
-The example configurations run the Druid Coordinator and Overlord together in 
a single process using the optional configuration 
`druid.coordinator.asOverlord.enabled=true`, described in the [Coordinator 
configuration documentation](../configuration/index.md#coordinator-operation).
-
-While example configurations are provided for very large single machines, at 
higher scales we recommend running Druid in a [clustered 
deployment](../tutorials/cluster.md), for fault-tolerance and reduced resource 
contention.
-
-## Druid auto start
-
-Druid includes a launch script, `bin/start-druid` that automatically sets 
various memory-related parameters based on available processors and memory. It 
accepts optional arguments such as list of services, total memory and a config 
directory to override default JVM arguments and service-specific runtime 
properties.
-
-`start-druid` is a generic launch script capable of starting any set of Druid 
services on a server.
-It accepts optional arguments such as list of services, total memory and a 
config directory to override default JVM arguments and service-specific runtime 
properties.
-Druid services will use all processors and up to 80% memory on the system.
-For details about possible arguments, run `bin/start-druid --help`.
-
-The corresponding launch scripts (e.g. `start-micro-quickstart`) are now 
deprecated.
-
-
-## Single server reference configurations
-
-### Nano-Quickstart: 1 CPU, 4GiB RAM
-
-- Launch command: `bin/start-nano-quickstart`
-- Configuration directory: `conf/druid/single-server/nano-quickstart`
-
-### Micro-Quickstart: 4 CPU, 16GiB RAM
-
-- Launch command: `bin/start-micro-quickstart`
-- Configuration directory: `conf/druid/single-server/micro-quickstart`
-
-### Small: 8 CPU, 64GiB RAM (~i3.2xlarge)
-
-- Launch command: `bin/start-small`
-- Configuration directory: `conf/druid/single-server/small`
-
-### Medium: 16 CPU, 128GiB RAM (~i3.4xlarge)
-
-- Launch command: `bin/start-medium`
-- Configuration directory: `conf/druid/single-server/medium`
-
-### Large: 32 CPU, 256GiB RAM (~i3.8xlarge)
-
-- Launch command: `bin/start-large`
-- Configuration directory: `conf/druid/single-server/large`
-
-### X-Large: 64 CPU, 512GiB RAM (~i3.16xlarge)
-
-- Launch command: `bin/start-xlarge`
-- Configuration directory: `conf/druid/single-server/xlarge`
\ No newline at end of file
diff --git a/docs/tutorials/docker.md b/docs/tutorials/docker.md
index c5cb5f22cf..955a6e2f7b 100644
--- a/docs/tutorials/docker.md
+++ b/docs/tutorials/docker.md
@@ -35,7 +35,7 @@ This tutorial assumes you will download the required files 
from GitHub. The file
 
 ### Docker memory requirements
 
-The default `docker-compose.yml` launches eight containers: Zookeeper, 
PostgreSQL, and six Druid containers based upon the [micro quickstart 
configuration](../operations/single-server.html#micro-quickstart-4-cpu-16gib-ram).
+The default `docker-compose.yml` launches eight containers: Zookeeper, 
PostgreSQL, and six Druid containers based upon the [micro quickstart 
configuration](../operations/single-server.html#single-server-reference-configurations-deprecated).
 Each Druid service is configured to use up to 7 GiB of memory (6 GiB direct 
memory and 1 GiB heap). However, the quickstart will not use all the available 
memory.
 
 For this setup, Docker needs at least 6 GiB of memory available for the Druid 
cluster. For Docker Desktop on Mac OS, adjust the memory settings in the 
[Docker Desktop preferences](https://docs.docker.com/desktop/mac/). If you 
experience a crash with a 137 error code you likely don't have enough memory 
allocated to Docker.
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 8ec7468801..3c7067d395 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -36,11 +36,6 @@ Druid supports a variety of ingestion options. Once you're 
done with this tutori
 
 You can follow these steps on a relatively modest machine, such as a 
workstation or virtual server with 16 GiB of RAM.
 
-Druid comes equipped with launch scripts that can be used to start all 
processes on a single server. Here, we will use 
[`auto`](../operations/single-server.md#druid-auto-start), which automatically 
sets various runtime properties based on available processors and memory.
-
-In addition, Druid includes several [bundled non-automatic 
profiles](../operations/single-server.md) for a range of machine sizes. These 
range from nano (1 CPU, 4GiB RAM) to x-large (64 CPU, 512GiB RAM). 
-We won't use those here, but for more information, see [Single server 
deployment](../operations/single-server.md). For additional information on 
deploying Druid services across clustered machines, see [Clustered 
deployment](./cluster.md).
-
 The software requirements for the installation machine are:
 
 * Linux, Mac OS X, or other Unix-like OS. (Windows is not supported)
@@ -70,7 +65,7 @@ The distribution directory contains `LICENSE` and `NOTICE` 
files and subdirector
 
 ## Start up Druid services
 
-Start up Druid services using the `auto` single-machine configuration.
+Start up Druid services using the automatic single-machine configuration.
 This configuration includes default settings that are appropriate for this 
tutorial, such as loading the `druid-multi-stage-query` extension by default so 
that you can use the MSQ task engine.
 
 You can view that setting and others in the configuration files in the 
`conf/druid/auto`. 
diff --git a/docs/tutorials/tutorial-batch-hadoop.md 
b/docs/tutorials/tutorial-batch-hadoop.md
index 234e8426b0..dad431acf5 100644
--- a/docs/tutorials/tutorial-batch-hadoop.md
+++ b/docs/tutorials/tutorial-batch-hadoop.md
@@ -28,7 +28,7 @@ This tutorial shows you how to load data files into Apache 
Druid using a remote
 
 For this tutorial, we'll assume that you've already completed the previous
 [batch ingestion tutorial](tutorial-batch.md) using Druid's native batch 
ingestion system and are using the
-`auto` single-machine configuration as described in the 
[quickstart](../operations/single-server.md#druid-auto-start).
+automatic single-machine configuration as described in the 
[quickstart](../operations/single-server.md).
 
 ## Install Docker
 
diff --git a/docs/tutorials/tutorial-kafka.md b/docs/tutorials/tutorial-kafka.md
index a102db1806..285f831058 100644
--- a/docs/tutorials/tutorial-kafka.md
+++ b/docs/tutorials/tutorial-kafka.md
@@ -30,7 +30,7 @@ The tutorial guides you through the steps to load sample 
nested clickstream data
 
 ## Prerequisites
 
-Before you follow the steps in this tutorial, download Druid as described in 
the [quickstart](index.md) using the 
[auto](../operations/single-server.md#druid-auto-start) single-machine 
configuration and have it running on your local machine. You don't need to have 
loaded any data.
+Before you follow the steps in this tutorial, download Druid as described in 
the [quickstart](index.md) using the [automatic single-machine 
configuration](../operations/single-server.md) and have it running on your 
local machine. You don't need to have loaded any data.
 
 ## Download and start Kafka
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
