kfaraz commented on code in PR #18231:
URL: https://github.com/apache/druid/pull/18231#discussion_r2218545244


##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
 This section contains important information about new and existing features.
 
+### Improved HTTP endpoints
+
+You can now use raw SQL in the HTTP body for `/druid/v2/sql` endpoints. Set `Content-Type` to `text/plain` instead of `application/json` to provide raw text that doesn't need to be JSON-escaped.
+
+[#17937](https://github.com/apache/druid/pull/17937)
+
+Additionally, SQL requests can now include multiple SET statements to build up 
context for the final statement. For example, the following query results in a 
statement that includes the `timeout`, `useCache`, `populateCache`, and 
`vectorize` query context parameters: 
+
+```sql
+SET timeout = 20000;
+SET useCache = false;
+SET populateCache = false;
+SET vectorize = 'force';
+SELECT "channel", "page", SUM("added") FROM "wikipedia" GROUP BY 1, 2
+```
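
The SET handling described above can be sketched in a few lines. The following is an illustrative Python sketch (not Druid source code, and the function name is hypothetical) of how leading SET statements fold into a query context map for the final statement:

```python
import re

def split_set_statements(sql_text):
    """Illustrative only: collect leading SET statements into a context
    map and return it together with the final statement."""
    context = {}
    statements = [s.strip() for s in sql_text.split(";") if s.strip()]
    for stmt in statements[:-1]:
        m = re.match(r"SET\s+(\w+)\s*=\s*(.+)", stmt, re.IGNORECASE)
        if m:
            # Strip optional single quotes around string values like 'force'
            context[m.group(1)] = m.group(2).strip().strip("'")
    return context, statements[-1]
```

Applied to the example above, this yields a context containing `timeout`, `useCache`, `populateCache`, and `vectorize`, plus the final SELECT statement.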
+
+This improvement also works for INSERT and REPLACE queries using the MSQ task 
engine. Note that JDBC isn't supported.
+
+[#17974](https://github.com/apache/druid/pull/17974)
+### Cloning Historicals
+
+You can now configure clones for Historicals using the dynamic Coordinator 
configuration `cloneServers`. Cloned Historicals are useful for situations such 
as rolling updates where you want to launch a new Historical as a replacement 
for an existing one.
+
+Set the config to a map from the target Historical server to the source 
Historical:
+
+```
+  "cloneServers": {"historicalClone":"historicalOriginal"}
+```
+
+The clone doesn't participate in regular segment assignment or balancing. 
Instead, the Coordinator mirrors any segment assignment made to the original 
Historical onto the clone, so that the clone becomes an exact copy of the 
source. Segments on the clone Historical do not count towards replica counts 
either. If the original Historical disappears, the clone remains in the last 
known state of the source server until removed from the `cloneServers` config.
+
+When you query your data using the native query engine, you can prefer 
(`preferClones`), exclude (`excludeClones`), or include (`includeClones`) 
clones by setting the query context parameter `cloneQueryMode`. By default, 
clones are excluded.
+
+As part of this change, new Coordinator APIs are available. For more 
information, see [Coordinator APIs for clones](#coordinator-apis-for-clones).
+
+[#17863](https://github.com/apache/druid/pull/17863) 
[#17899](https://github.com/apache/druid/pull/17899) 
[#17956](https://github.com/apache/druid/pull/17956) 
+
+### Overlord kill tasks

Review Comment:
   ```suggestion
   ### Embedded kill tasks on the Overlord
   ```



##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
+### Overlord kill tasks
+
+You can now run kill tasks directly on the Overlord itself. Running kill tasks 
on the Overlord provides the following benefits:

Review Comment:
   ```suggestion
   You can now run kill tasks directly on the Overlord itself. Embedded kill 
tasks provide several benefits as they:
   ```



##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
+### Overlord kill tasks
+
+You can now run kill tasks directly on the Overlord itself. Running kill tasks 
on the Overlord provides the following benefits:
+
+- Unused segments are killed as soon as they're eligible and are killed faster
+- Doesn't require a task slot
+- Locked intervals are automatically skipped
+- Configuration is simpler
+- A large number of unused segments doesn't cause issues for them
+
+This feature is controlled by the following configs:
+
+- `druid.manager.segments.killUnused.enabled` - Whether the feature is enabled.
+- `druid.manager.segments.killUnused.bufferPeriod` - The amount of time that a segment must be unused before it can be permanently removed from metadata and deep storage. This buffer period helps prevent data loss if the data turns out to be needed after being marked unused.
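For example, a runtime properties fragment enabling the feature might look like the following. The values are illustrative, and the buffer period is assumed to be an ISO 8601 period, as with similar Druid duration configs:

```
druid.manager.segments.killUnused.enabled=true
druid.manager.segments.killUnused.bufferPeriod=P30D
```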
+
+As part of this feature, [new metrics](#overlord-kill-task-metrics) have been 
added.
+
+[#18028](https://github.com/apache/druid/pull/18028)
+
+### Preferred tier selection
+
+You can now configure the Broker service to prefer Historicals on a specific tier. This can help ensure Druid executes queries within the same availability zone if you have Druid deployed across multiple availability zones.
+
+[#18136](https://github.com/apache/druid/pull/18136)
+
+### Dart improvements
+
+Dart-specific endpoints have been removed and folded into `SqlResource`. In addition, there is a new `engine` query context parameter that determines which engine runs the query. The value can be `native` (the default) or `msq-dart`. [#18003](https://github.com/apache/druid/pull/18003)
+
+Dart can now query real-time tasks by setting the query context parameter `includeSegmentSource` to `realtime`, in a similar way to MSQ tasks. [#18076](https://github.com/apache/druid/pull/18076)
+
+### `SegmentMetadataCache` on the Coordinator
+
+[#17996](https://github.com/apache/druid/pull/17996) 
[#17935](https://github.com/apache/druid/pull/17935)
+
 ## Functional area and related changes
 
 This section contains detailed release notes separated by areas.
 
 ### Web console
 
+#### SET statements
+
+The web console supports using SET statements to specify query context parameters. For example, if you include `SET timeout = 20000;` in your query, the `timeout` query context parameter is set.
+
+[#17966](https://github.com/apache/druid/pull/17966)
+
 #### Other web console improvements
 
+- You can now assign tiered replications to tiers that aren't currently online 
[#18050](https://github.com/apache/druid/pull/18050)
+- You can now filter tasks by the error in the Task view 
[#18057](https://github.com/apache/druid/pull/18057)
+- Improved SQL autocomplete and added JSON autocomplete 
[#18126](https://github.com/apache/druid/pull/18126)
+- Updated the web console to use the Overlord APIs instead of Coordinator APIs 
when managing segments, such as marking them as unused 
[#18172](https://github.com/apache/druid/pull/18172)
+
 ### Ingestion
 
+- Improved concurrency for batch and streaming ingestion tasks 
[#17828](https://github.com/apache/druid/pull/17828)
+- Removed the `useMaxMemoryEstimates` config, which has defaulted to `false` for several releases. Druid now always uses the more accurate memory estimation method introduced in Druid 0.23.0 [#17936](https://github.com/apache/druid/pull/17936)
+
 #### SQL-based ingestion
 
 ##### Other SQL-based ingestion improvements
 
 #### Streaming ingestion
 
+##### Multi-stream supervisors (experimental)
+
+You can now use more than one supervisor to ingest data into the same 
datasource. Include the `spec.dataSchema.dataSource` field to help identify the 
supervisor.
+
+When using this feature, make sure you set `useConcurrentLocks` to `true` in the `context` field of the supervisor spec.
+
+[#18149](https://github.com/apache/druid/pull/18149) 
[#18082](https://github.com/apache/druid/pull/18082)
+
+##### Supervisors and the underlying input stream
+
+Seekable stream supervisors (Kafka, Kinesis, and Rabbit) can no longer update 
the underlying input stream (such as a topic for Kafka) that is persisted for 
it. This action was previously allowed by the API, but it isn't fully supported 
by the underlying system. Going forward, a request to make such a change 
results in a 400 error from the Supervisor API that explains why it isn't 
allowed. 
+
+[#17955](https://github.com/apache/druid/pull/17955) 
[#17975](https://github.com/apache/druid/pull/17975)
+
 ##### Other streaming ingestion improvements
 
+- Improved streaming ingestion so that it automatically determines the maximum number of columns to merge [#17917](https://github.com/apache/druid/pull/17917)
+
 ### Querying
 
+#### Metadata query for segments
+
+You can use a segment metadata query to find the list of projections attached 
to a segment.
+
+[#18119](https://github.com/apache/druid/pull/18119)
+
 #### Other querying improvements
 
+- You can now perform big decimal aggregations using the MSQ task engine 
[#18164](https://github.com/apache/druid/pull/18164)
+- The `MV_OVERLAP` and `MV_CONTAINS` functions now align more closely with the native `inType` filter [#18084](https://github.com/apache/druid/pull/18084)
+- Improved query handling when some segments are missing on Historicals. Druid 
no longer incorrectly returns partial results 
[#18025](https://github.com/apache/druid/pull/18025)
+
 ### Cluster management
 
+#### Configurable timeout for subtasks
+
+You can now configure a timeout for `index_parallel` and `compact` type tasks. 
Set the context parameter `subTaskTimeoutMillis` to the maximum time in 
milliseconds you want to wait before a subtask gets canceled. By default, 
there's no timeout.
+
+Using this config helps parent tasks fail sooner instead of getting stuck and 
can free up tasks slots from zombie tasks.

Review Comment:
   ```suggestion
   Using this config helps parent tasks fail sooner instead of being stuck 
running zombie sub-tasks.
   ```



##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
 #### Streaming ingestion
 
+##### Multi-stream supervisors (experimental)
+
+You can now use more than one supervisor to ingest data into the same 
datasource. Include the `spec.dataSchema.dataSource` field to help identify the 
supervisor.

Review Comment:
   ```suggestion
   You can now use more than one supervisor to ingest data into the same 
datasource. Use the `id` field to distinguish between supervisors ingesting 
into the same datasource (identified by `spec.dataSchema.dataSource` for 
streaming supervisors).
   ```



##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
+##### Supervisors and the underlying input stream
+
+Seekable stream supervisors (Kafka, Kinesis, and Rabbit) can no longer update 
the underlying input stream (such as a topic for Kafka) that is persisted for 
it. This action was previously allowed by the API, but it isn't fully supported 
by the underlying system. Going forward, a request to make such a change 
results in a 400 error from the Supervisor API that explains why it isn't 
allowed. 

Review Comment:
   ```suggestion
   Seekable stream supervisors (Kafka, Kinesis, and Rabbit) can no longer be 
updated to ingest from a different input stream (such as a topic for Kafka). 
Since such a change is not fully supported by the underlying system, a request 
to make such a change will result in a 400 error. 
   ```



##########
docs/release-info/release-notes.md:
##########
@@ -57,63 +57,308 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
+### Overlord kill tasks
+
+You can now run kill tasks directly on the Overlord itself. Running kill tasks 
on the Overlord provides the following benefits:
+
+- Unused segments are killed as soon as they're eligible and are killed faster
+- Doesn't require a task slot
+- Locked intervals are automatically skipped
+- Configuration is simpler
+- A large number of unused segments doesn't cause issues for them

Review Comment:
   adapted from the PR description:
   
   ```suggestion
   - kill unused segments as soon as they become eligible.
   - do not take up task slots.
   - finish faster as they use optimized metadata queries and do not launch a 
new JVM. 
   - kill a small number of segments per task, to ensure that locks on an 
interval are not held for too long.
   - skip locked intervals to avoid head-of-line blocking.
   - require minimal configuration.
   - can keep up with a large number of unused segments in the cluster.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
