This is an automated email from the ASF dual-hosted git repository.

abhishek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new efbb58e90e docs: remove maxRowsPerSegment where appropriate (#12071)
efbb58e90e is described below

commit efbb58e90e5b1400f74c0eaa7ecb69e002176f17
Author: Charles Smith <[email protected]>
AuthorDate: Thu Jul 28 04:22:13 2022 -0700

    docs: remove maxRowsPerSegment where appropriate (#12071)
    
    * remove maxRowsPerSegment where appropriate
    
    * fix tutorial, accept suggestions
    
    * Update docs/design/coordinator.md
    
    * additional tutorial file
    
    * fix initial index spec
    
    * accept comments
    
    * Update docs/tutorials/tutorial-compaction.md
    
    Co-authored-by: Victoria Lim <[email protected]>
    
    * add back comment on maxrows per segment
    
    * rm duplicate entry
    
    * Update native-batch-simple-task.md
    
    remove ref to `maxrowspersegment`
    
    * Update native-batch.md
    
    remove ref to `maxrowspersegment`
    
    * final tentacles
    
    * Apply suggestions from code review
    
    Co-authored-by: Victoria Lim <[email protected]>
---
 docs/configuration/index.md                        |  1 -
 docs/design/coordinator.md                         |  2 +-
 docs/ingestion/compaction.md                       |  2 +-
 docs/ingestion/native-batch-simple-task.md         |  3 +-
 docs/ingestion/native-batch.md                     |  7 +-
 docs/tutorials/tutorial-batch.md                   |  4 +-
 docs/tutorials/tutorial-compaction.md              | 78 +++++++++++++---------
 docs/tutorials/tutorial-ingestion-spec.md          | 11 ++-
 docs/tutorials/tutorial-rollup.md                  |  4 +-
 docs/tutorials/tutorial-transform-spec.md          |  4 +-
 .../tutorial/compaction-day-granularity.json       |  4 +-
 .../quickstart/tutorial/compaction-init-index.json |  5 +-
 .../tutorial/compaction-keep-granularity.json      |  4 +-
 13 files changed, 80 insertions(+), 49 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 50ff571cf0..2246727ad2 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -968,7 +968,6 @@ You can configure automatic compaction through the following properties:
 |`dataSource`|dataSource name to be compacted.|yes|
 |`taskPriority`|[Priority](../ingestion/tasks.md#priority) of compaction 
task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per 
compaction task. Since a time chunk must be processed in its entirety, if the 
segments for a particular time chunk have a total size in bytes greater than 
this parameter, compaction will not run for that time chunk. Because each 
compaction task runs with a single thread, setting this value too far above 
1–2GB will result in compaction tasks taking an excessive amount of time.|no 
(default = 100,000,000,000,000 i. [...]
-|`maxRowsPerSegment`|Max number of rows per segment after compaction.|no|
 |`skipOffsetFromLatest`|The offset for searching segments to be compacted in 
[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Strongly 
recommended to set for realtime dataSources. See [Data handling with 
compaction](../ingestion/compaction.md#data-handling-with-compaction).|no 
(default = "P1D")|
 |`tuningConfig`|Tuning config for compaction tasks. See below [Automatic 
compaction tuningConfig](#automatic-compaction-tuningconfig).|no|
 |`taskContext`|[Task context](../ingestion/tasks.md#context) for compaction 
tasks.|no|
diff --git a/docs/design/coordinator.md b/docs/design/coordinator.md
index 5ce849df75..8d0a5d18e6 100644
--- a/docs/design/coordinator.md
+++ b/docs/design/coordinator.md
@@ -113,7 +113,7 @@ need compaction.
 A set of segments needs compaction if all conditions below are satisfied:
 
 1) Total size of segments in the time chunk is smaller than or equal to the 
configured `inputSegmentSizeBytes`.
-2) Segments have never been compacted yet or compaction spec has been updated since the last compaction, especially `maxRowsPerSegment`, `maxTotalRows`, and `indexSpec`.
+2) Segments have never been compacted yet or compaction spec has been updated since the last compaction: `maxTotalRows` or `indexSpec`.
 
 Here are some details with an example. Suppose we have two dataSources (`foo`, 
`bar`) as seen below:
 
diff --git a/docs/ingestion/compaction.md b/docs/ingestion/compaction.md
index b131ad1dc8..d366556fc7 100644
--- a/docs/ingestion/compaction.md
+++ b/docs/ingestion/compaction.md
@@ -132,7 +132,7 @@ To perform a manual compaction, you submit a compaction task. Compaction tasks m
 
 > Note: Use `granularitySpec` over `segmentGranularity` and only set one of 
 > these values. If you specify different values for these in the same 
 > compaction spec, the task fails.
 
-To control the number of result segments per time chunk, you can set [`maxRowsPerSegment`](../configuration/index.md#automatic-compaction-dynamic-configuration) or [`numShards`](../ingestion/native-batch.md#tuningconfig).
+To control the number of result segments per time chunk, you can set [`maxRowsPerSegment`](./native-batch.md#partitionsspec) or [`numShards`](../ingestion/native-batch.md#tuningconfig).
 
 > You can run multiple compaction tasks in parallel. For example, if you want 
 > to compact the data for a year, you are not limited to running a single task 
 > for the entire year. You can run 12 compaction tasks with month-long 
 > intervals.
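For orientation (this is not part of the diff), a manual compaction task using the option linked above can be sketched as a Python dict mirroring the JSON spec; the datasource name here is hypothetical, and `maxRowsPerSegment` sits inside a dynamic `partitionsSpec` rather than at the top level of `tuningConfig`:

```python
import json

# Hypothetical manual compaction task sketch: maxRowsPerSegment lives
# inside a dynamic partitionsSpec, not at the top of tuningConfig.
compaction_task = {
    "type": "compact",
    "dataSource": "example-datasource",  # hypothetical name
    "interval": "2015-09-12/2015-09-13",
    "tuningConfig": {
        "type": "index_parallel",
        "partitionsSpec": {"type": "dynamic", "maxRowsPerSegment": 5000000},
    },
}
print(json.dumps(compaction_task, indent=2))
```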
 
diff --git a/docs/ingestion/native-batch-simple-task.md b/docs/ingestion/native-batch-simple-task.md
index bb931a50ed..c7b37449f9 100644
--- a/docs/ingestion/native-batch-simple-task.md
+++ b/docs/ingestion/native-batch-simple-task.md
@@ -131,11 +131,10 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |property|description|default|required?|
 |--------|-----------|-------|---------|
 |type|The task type, this should always be "index".|none|yes|
-|maxRowsPerSegment|Deprecated. Use `partitionsSpec` instead. Used in sharding. 
Determines how many rows are in each segment.|5000000|no|
 |maxRowsInMemory|Used in determining when intermediate persists to disk should 
occur. Normally user does not need to set this, but depending on the nature of 
data, if rows are short in terms of bytes, user may not want to store a million 
rows in memory and this value should be set.|1000000|no|
 |maxBytesInMemory|Used in determining when intermediate persists to disk 
should occur. Normally this is computed internally and user does not need to 
set it. This value represents number of bytes to aggregate in heap memory 
before persisting. This is based on a rough estimate of memory usage and not 
actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * 
(2 + maxPendingPersists). Note that `maxBytesInMemory` also includes heap usage 
of artifacts created from interm [...]
 |maxTotalRows|Deprecated. Use `partitionsSpec` instead. Total number of rows 
in segments waiting for being pushed. Used in determining when intermediate 
pushing should occur.|20000000|no|
-|numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create. If this is specified and `intervals` is specified 
in the `granularitySpec`, the index task can skip the determine 
intervals/partitions pass through the data. `numShards` cannot be specified if 
`maxRowsPerSegment` is set.|null|no|
+|numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create. If this is specified and `intervals` is specified 
in the `granularitySpec`, the index task can skip the determine 
intervals/partitions pass through the data.|null|no|
 |partitionDimensions|Deprecated. Use `partitionsSpec` instead. The dimensions 
to partition on. Leave blank to select all dimensions. Only used with 
`forceGuaranteedRollup` = true, will be ignored otherwise.|null|no|
 |partitionsSpec|Defines how to partition data in each timeChunk, see 
[PartitionsSpec](#partitionsspec)|`dynamic` if `forceGuaranteedRollup` = false, 
`hashed` if `forceGuaranteedRollup` = true|no|
 |indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](ingestion-spec.md#indexspec)|null|no|
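The deprecations in this table all point the same way: top-level sharding knobs move into `partitionsSpec`. As a hedged illustration only (this helper is hypothetical, not a Druid API), migrating an old `tuningConfig` could look like:

```python
def migrate_tuning_config(tuning: dict) -> dict:
    """Move the deprecated top-level maxRowsPerSegment into a dynamic
    partitionsSpec, leaving any existing partitionsSpec untouched."""
    migrated = dict(tuning)
    max_rows = migrated.pop("maxRowsPerSegment", None)
    if max_rows is not None and "partitionsSpec" not in migrated:
        migrated["partitionsSpec"] = {
            "type": "dynamic",
            "maxRowsPerSegment": max_rows,
        }
    return migrated

old = {"type": "index", "maxRowsPerSegment": 5000000, "maxRowsInMemory": 1000000}
new = migrate_tuning_config(old)
# new carries {"type": "dynamic", "maxRowsPerSegment": 5000000} under partitionsSpec
```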
diff --git a/docs/ingestion/native-batch.md b/docs/ingestion/native-batch.md
index 1ecba43741..aba390b228 100644
--- a/docs/ingestion/native-batch.md
+++ b/docs/ingestion/native-batch.md
@@ -222,12 +222,11 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |property|description|default|required?|
 |--------|-----------|-------|---------|
 |type|The task type. Set the value to`index_parallel`.|none|yes|
-|maxRowsPerSegment|Deprecated. Use `partitionsSpec` instead. Used in sharding. 
Determines how many rows are in each segment.|5000000|no|
 |maxRowsInMemory|Determines when Druid should perform intermediate persists to 
disk. Normally you do not need to set this. Depending on the nature of your 
data, if rows are short in terms of bytes. For example, you may not want to 
store a million rows in memory. In this case, set this value.|1000000|no|
 |maxBytesInMemory|Use to determine when Druid should perform intermediate 
persists to disk. Normally Druid computes this internally and you do not need 
to set it. This value represents number of bytes to aggregate in heap memory 
before persisting. This is based on a rough estimate of memory usage and not 
actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * 
(2 + maxPendingPersists). Note that `maxBytesInMemory` also includes heap usage 
of artifacts created from i [...]
 |maxColumnsToMerge|Limit of the number of segments to merge in a single phase 
when merging segments for publishing. This limit affects the total number of 
columns present in a set of segments to merge. If the limit is exceeded, 
segment merging occurs in multiple phases. Druid merges at least 2 segments per 
phase, regardless of this setting.|-1 (unlimited)|no|
 |maxTotalRows|Deprecated. Use `partitionsSpec` instead. Total number of rows 
in segments waiting to be pushed. Used to determine when intermediate pushing 
should occur.|20000000|no|
-|numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create when using a `hashed` `partitionsSpec`. If this is 
specified and `intervals` is specified in the `granularitySpec`, the index task 
can skip the determine intervals/partitions pass through the data. `numShards` 
cannot be specified if `maxRowsPerSegment` is set.|null|no|
+|numShards|Deprecated. Use `partitionsSpec` instead. Directly specify the 
number of shards to create when using a `hashed` `partitionsSpec`. If this is 
specified and `intervals` is specified in the `granularitySpec`, the index task 
can skip the determine intervals/partitions pass through the data.|null|no|
 |splitHintSpec|Hint to control the amount of data that each first phase task 
reads. Druid may ignore the hint depending on the implementation of the input 
source. See [Split hint spec](#split-hint-spec) for more details.|size-based 
split hint spec|no|
 |partitionsSpec|Defines how to partition data in each timeChunk, see 
[PartitionsSpec](#partitionsspec)|`dynamic` if `forceGuaranteedRollup` = false, 
`hashed` or `single_dim` if `forceGuaranteedRollup` = true|no|
 |indexSpec|Defines segment storage format options to be used at indexing time, 
see [IndexSpec](ingestion-spec.md#indexspec)|null|no|
@@ -615,7 +614,9 @@ An example of the result is
       },
       "tuningConfig": {
         "type": "index_parallel",
-        "maxRowsPerSegment": 5000000,
+        "partitionsSpec": {
+          "type": "dynamic"
+        },
         "maxRowsInMemory": 1000000,
         "maxTotalRows": 20000000,
         "numShards": null,
diff --git a/docs/tutorials/tutorial-batch.md b/docs/tutorials/tutorial-batch.md
index 6f9a4d9404..25a3380bf5 100644
--- a/docs/tutorials/tutorial-batch.md
+++ b/docs/tutorials/tutorial-batch.md
@@ -96,7 +96,9 @@ which has been configured to read the `quickstart/tutorial/wikiticker-2015-09-12
     },
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 5000000,
+      "partitionsSpec": {
+        "type": "dynamic"
+      },
       "maxRowsInMemory" : 25000
     }
   }
diff --git a/docs/tutorials/tutorial-compaction.md b/docs/tutorials/tutorial-compaction.md
index 94cff21ae3..84bf38f44a 100644
--- a/docs/tutorials/tutorial-compaction.md
+++ b/docs/tutorials/tutorial-compaction.md
@@ -24,40 +24,46 @@ sidebar_label: "Compacting segments"
   -->
 
 
-This tutorial demonstrates how to compact existing segments into fewer but 
larger segments.
+This tutorial demonstrates how to compact existing segments into fewer but 
larger segments in Apache Druid.
 
-Because there is some per-segment memory and processing overhead, it can 
sometimes be beneficial to reduce the total number of segments.
-Please check [Segment size 
optimization](../operations/segment-optimization.md) for details.
+There is some per-segment memory and processing overhead during query 
processing.
+Therefore, it can be beneficial to reduce the total number of segments.
+See [Segment size optimization](../operations/segment-optimization.md) for 
details.
 
-For this tutorial, we'll assume you've already downloaded Apache Druid as 
described in
+## Prerequisites
+
+This tutorial assumes you have already downloaded Apache Druid as described in
 the [single-machine quickstart](index.md) and have it running on your local 
machine.
 
-It will also be helpful to have finished [Tutorial: Loading a 
file](../tutorials/tutorial-batch.md) and [Tutorial: Querying 
data](../tutorials/tutorial-query.md).
+If you haven't already, you should finish the following tutorials first:
+- [Tutorial: Loading a file](../tutorials/tutorial-batch.md)
+- [Tutorial: Querying data](../tutorials/tutorial-query.md)
 
 ## Load the initial data
 
-For this tutorial, we'll be using the Wikipedia edits sample data, with an 
ingestion task spec that will create 1-3 segments per hour in the input data.
+This tutorial uses the Wikipedia edits sample data included with the Druid 
distribution.
+To load the initial data, you use an ingestion spec that loads batch data with 
segment granularity of `HOUR` and creates between one and three segments per 
hour.
 
-The ingestion spec can be found at 
`quickstart/tutorial/compaction-init-index.json`. Let's submit that spec, which 
will create a datasource called `compaction-tutorial`:
+You can review the ingestion spec at 
`quickstart/tutorial/compaction-init-index.json`.
+Submit the spec as follows to create a datasource called `compaction-tutorial`:
 
 ```bash
 bin/post-index-task --file quickstart/tutorial/compaction-init-index.json 
--url http://localhost:8081
 ```
 
-> Please note that `maxRowsPerSegment` in the ingestion spec is set to 1000. This is to generate multiple segments per hour and _NOT_ recommended in production.
-> It's 5000000 by default and may need to be adjusted to make your segments optimized.
+> `maxRowsPerSegment` in the tutorial ingestion spec is set to 1000 to generate multiple segments per hour for demonstration purposes. Do not use this spec in production.
 
-After the ingestion completes, go to 
[http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources)
 in a browser to see the new datasource in the Druid console.
+After the ingestion completes, navigate to 
[http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources)
 in a browser to see the new datasource in the Druid console.
 
 ![compaction-tutorial datasource](../assets/tutorial-compaction-01.png 
"compaction-tutorial datasource")
 
-Click the `51 segments` link next to "Fully Available" for the 
`compaction-tutorial` datasource to view information about the datasource's 
segments:
+In the **Availability** column for the `compaction-tutorial` datasource, click 
the link for **51 segments** to view segments information for the datasource.
 
-There will be 51 segments for this datasource, 1-3 segments per hour in the 
input data:
+The datasource comprises 51 segments, between one and three segments per hour 
from the input data:
 
 ![Original segments](../assets/tutorial-compaction-02.png "Original segments")
 
-Running a COUNT(*) query on this datasource shows that there are 39,244 rows:
+Run a COUNT query on the datasource to verify there are 39,244 rows:
 
 ```bash
 dsql> select count(*) from "compaction-tutorial";
@@ -71,9 +77,8 @@ Retrieved 1 row in 1.38s.
 
 ## Compact the data
 
-Let's now compact these 51 small segments.
-
-We have included a compaction task spec for this tutorial datasource at 
`quickstart/tutorial/compaction-keep-granularity.json`:
+Now you compact these 51 small segments and retain the segment granularity of 
`HOUR`.
+The Druid distribution includes a compaction task spec for this tutorial 
datasource at `quickstart/tutorial/compaction-keep-granularity.json`:
 
 ```json
 {
@@ -82,19 +87,21 @@ We have included a compaction task spec for this datasource at `quickst
   "interval": "2015-09-12/2015-09-13",
   "tuningConfig" : {
     "type" : "index_parallel",
-    "maxRowsPerSegment" : 5000000,
+    "partitionsSpec": {
+        "type": "dynamic"
+    },
     "maxRowsInMemory" : 25000
   }
 }
 ```
 
-This will compact all segments for the interval `2015-09-12/2015-09-13` in the 
`compaction-tutorial` datasource.
+This compacts all segments for the interval `2015-09-12/2015-09-13` in the 
`compaction-tutorial` datasource.
 
 The parameters in the `tuningConfig` control the maximum number of rows 
present in each compacted segment and thus affect the number of segments in the 
compacted set.
 
-In this tutorial example, only one compacted segment will be created per hour, as each hour has less rows than the 5000000 `maxRowsPerSegment` (note that the total number of rows is 39244).
+This datasource only has 39,244 rows. 39,244 is below the default limit of 5,000,000 `maxRowsPerSegment` for [dynamic partitioning](../ingestion/native-batch.md#dynamic-partitioning). Therefore, Druid only creates one compacted segment per hour.
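The arithmetic behind that sentence can be sketched as a back-of-the-envelope helper (not Druid code): with dynamic partitioning, each segment holds at most `maxRowsPerSegment` rows, so a time chunk needs roughly `ceil(rows / maxRowsPerSegment)` segments.

```python
import math

def estimated_segments(rows_in_chunk: int, max_rows_per_segment: int = 5_000_000) -> int:
    # Dynamic partitioning caps each segment at maxRowsPerSegment rows;
    # a chunk with any data needs at least one segment.
    return max(1, math.ceil(rows_in_chunk / max_rows_per_segment))

# Even if all 39,244 tutorial rows landed in a single hourly chunk, that is
# far below the 5,000,000 default, so each hour compacts to one segment.
print(estimated_segments(39_244))  # 1
```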
 
-Let's submit this task now:
+Submit the compaction task now:
 
 ```bash
 bin/post-index-task --file 
quickstart/tutorial/compaction-keep-granularity.json --url http://localhost:8081
@@ -102,17 +109,19 @@ bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json
 
 After the task finishes, refresh the [segments 
view](http://localhost:8888/unified-console.html#segments).
 
-The original 51 segments will eventually be marked as "unused" by the 
Coordinator and removed, with the new compacted segments remaining.
+Over time the Coordinator marks the original 51 segments as unused and 
subsequently removes them to leave only the new compacted segments.
 
-By default, the Druid Coordinator will not mark segments as unused until the 
Coordinator process has been up for at least 15 minutes, so you may see the old 
segment set and the new compacted set at the same time in the Druid console, 
with 75 total segments:
+By default, the Coordinator does not mark segments as unused until the 
Coordinator has been running for at least 15 minutes.
+During that time, you may see 75 total segments comprised of the old segment 
set and the new compacted set:
 
 ![Compacted segments intermediate state 
1](../assets/tutorial-compaction-03.png "Compacted segments intermediate state 
1")
 
 ![Compacted segments intermediate state 
2](../assets/tutorial-compaction-04.png "Compacted segments intermediate state 
2")
 
-The new compacted segments have a more recent version than the original 
segments, so even when both sets of segments are shown in the Druid console, 
queries will only read from the new compacted segments.
+The new compacted segments have a more recent version than the original 
segments.
+Even though the Druid console displays both sets of segments, queries only 
read from the new compacted segments.
 
-Let's try running a COUNT(*) on `compaction-tutorial` again, where the row 
count should still be 39,244:
+Run a COUNT query on `compaction-tutorial` again to verify the number of rows 
remains 39,244:
 
 ```bash
 dsql> select count(*) from "compaction-tutorial";
@@ -124,7 +133,7 @@ dsql> select count(*) from "compaction-tutorial";
 Retrieved 1 row in 1.30s.
 ```
 
-After the Coordinator has been running for at least 15 minutes, the [segments 
view](http://localhost:8888/unified-console.html#segments) should show there 
are 24 segments, one per hour:
+After the Coordinator has been running for at least 15 minutes, the segments 
view only shows the new 24 segments, one for each hour:
 
 ![Compacted segments hourly granularity 
1](../assets/tutorial-compaction-05.png "Compacted segments hourly granularity 
1")
 
@@ -132,9 +141,9 @@ After the Coordinator has been running for at least 15 minutes, the [segments vi
 
 ## Compact the data with new segment granularity
 
-The compaction task can also produce compacted segments with a granularity 
different from the granularity of the input segments.
+You can also change the segment granularity in a compaction task to produce 
compacted segments with a different granularity from that of the input segments.
 
-We have included a compaction task spec that will create DAY granularity 
segments at `quickstart/tutorial/compaction-day-granularity.json`:
+The Druid distribution includes a compaction task spec to create `DAY` 
granularity segments at `quickstart/tutorial/compaction-day-granularity.json`:
 
 ```json
 {
@@ -143,7 +152,9 @@ We have included a compaction task spec that will create DAY granularity segment
   "interval": "2015-09-12/2015-09-13",
   "tuningConfig" : {
     "type" : "index_parallel",
-    "maxRowsPerSegment" : 5000000,
+    "partitionsSpec": {
+        "type": "dynamic"
+    },
     "maxRowsInMemory" : 25000,
     "forceExtendableShardSpecs" : true
   },
@@ -156,21 +167,22 @@ We have included a compaction task spec that will create DAY granularity segment
 
 Note that `segmentGranularity` is set to `DAY` in this compaction task spec.
 
-Let's submit this task now:
+Submit this task now:
 
 ```bash
 bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json 
--url http://localhost:8081
 ```
 
-It will take a bit of time before the Coordinator marks the old input segments 
as unused, so you may see an intermediate state with 25 total segments. 
Eventually, there will only be one DAY granularity segment:
+It takes some time before the Coordinator marks the old input segments as 
unused, so you may see an intermediate state with 25 total segments. 
Eventually, only one DAY granularity segment remains:
 
 ![Compacted segments day granularity 1](../assets/tutorial-compaction-07.png 
"Compacted segments day granularity 1")
 
 ![Compacted segments day granularity 2](../assets/tutorial-compaction-08.png 
"Compacted segments day granularity 2")
 
+## Learn more
 
-## Further reading
+This tutorial demonstrated how to use a compaction task spec to manually 
compact segments and how to optionally change the segment granularity for 
segments.
 
-[Task documentation](../ingestion/tasks.md)
 
-[Segment optimization](../operations/segment-optimization.md)
+- For more details, see [Compaction](../ingestion/compaction.md).
+- To learn about the benefits of compaction, see [Segment 
optimization](../operations/segment-optimization.md).
diff --git a/docs/tutorials/tutorial-ingestion-spec.md b/docs/tutorials/tutorial-ingestion-spec.md
index 13a0b8700d..d4360a8db1 100644
--- a/docs/tutorials/tutorial-ingestion-spec.md
+++ b/docs/tutorials/tutorial-ingestion-spec.md
@@ -511,8 +511,12 @@ As an example, let's add a `tuningConfig` that sets a target segment size for th
 ```json
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 5000000
+      "partitionsSpec": {
+        "type": "dynamic",
+         "maxRowsPerSegment" : 5000000
+      }
     }
+         
 ```
 
 Note that each ingestion task has its own type of `tuningConfig`.
@@ -567,7 +571,10 @@ We've finished defining the ingestion spec, it should now look like the followin
     },
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 5000000
+      "partitionsSpec": {
+        "type": "dynamic",
+         "maxRowsPerSegment" : 5000000
+      }
     }
   }
 }
diff --git a/docs/tutorials/tutorial-rollup.md b/docs/tutorials/tutorial-rollup.md
index 19f68327db..5081aa8f5e 100644
--- a/docs/tutorials/tutorial-rollup.md
+++ b/docs/tutorials/tutorial-rollup.md
@@ -96,7 +96,9 @@ We'll ingest this data using the following ingestion task spec, located at `quic
     },
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 5000000,
+      "partitionsSpec": {
+        "type": "dynamic"
+      },
       "maxRowsInMemory" : 25000
     }
   }
diff --git a/docs/tutorials/tutorial-transform-spec.md b/docs/tutorials/tutorial-transform-spec.md
index 356efc597e..cbcf1d7166 100644
--- a/docs/tutorials/tutorial-transform-spec.md
+++ b/docs/tutorials/tutorial-transform-spec.md
@@ -111,7 +111,9 @@ We will ingest the sample data using the following spec, which demonstrates the
     },
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 5000000,
+      "partitionsSpec": {
+        "type": "dynamic"
+      },
       "maxRowsInMemory" : 25000
     }
   }
diff --git a/examples/quickstart/tutorial/compaction-day-granularity.json b/examples/quickstart/tutorial/compaction-day-granularity.json
index 0314edbca3..e1f9162a01 100644
--- a/examples/quickstart/tutorial/compaction-day-granularity.json
+++ b/examples/quickstart/tutorial/compaction-day-granularity.json
@@ -4,7 +4,9 @@
   "interval": "2015-09-12/2015-09-13",
   "tuningConfig" : {
     "type" : "index_parallel",
-    "maxRowsPerSegment" : 5000000,
+    "partitionsSpec": {
+        "type": "dynamic"
+    },
     "maxRowsInMemory" : 25000,
     "forceExtendableShardSpecs" : true
   },
diff --git a/examples/quickstart/tutorial/compaction-init-index.json b/examples/quickstart/tutorial/compaction-init-index.json
index f2c00481c3..43124887b2 100644
--- a/examples/quickstart/tutorial/compaction-init-index.json
+++ b/examples/quickstart/tutorial/compaction-init-index.json
@@ -53,7 +53,10 @@
     },
     "tuningConfig" : {
       "type" : "index_parallel",
-      "maxRowsPerSegment" : 1000
+      "partitionsSpec": {
+        "type": "dynamic",
+        "maxRowsPerSegment" : 1000
+    }
     }
   }
 }
diff --git a/examples/quickstart/tutorial/compaction-keep-granularity.json b/examples/quickstart/tutorial/compaction-keep-granularity.json
index ba76d612bd..ee5ad921d1 100644
--- a/examples/quickstart/tutorial/compaction-keep-granularity.json
+++ b/examples/quickstart/tutorial/compaction-keep-granularity.json
@@ -4,7 +4,9 @@
   "interval": "2015-09-12/2015-09-13",
   "tuningConfig" : {
     "type" : "index_parallel",
-    "maxRowsPerSegment" : 5000000,
+    "partitionsSpec": {
+        "type": "dynamic"
+    },
     "maxRowsInMemory" : 25000
   }
 }


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
