ektravel commented on code in PR #16412:
URL: https://github.com/apache/druid/pull/16412#discussion_r1598626182


##########
docs/release-info/release-notes.md:
##########
@@ -57,50 +57,670 @@ For tips about how to write a good release note, see 
[Release notes](https://git
 
 This section contains important information about new and existing features.
 
+### Centralized datasource schema (alpha)
+
+You can now configure Druid to centralize schema management using the 
Coordinator service. Previously, Brokers needed to query data nodes and tasks 
for segment schemas. Centralizing datasource schemas can improve startup time 
for Brokers and the efficiency of your deployment.
+
+If enabled, the following changes occur:
+
+- Realtime segment schema changes are periodically pushed to the Coordinator
+- Tasks publish segment schemas and metadata to the metadata database
+- The Coordinator service polls the schema and segment metadata to build datasource schemas
+- Brokers fetch datasource schemas from the Coordinator when possible; otherwise, the Broker builds the schema itself
+
+This behavior is currently opt-in. To enable this feature, set the following 
configs:
+
+- In your common runtime properties, set 
`druid.centralizedDatasourceSchema.enabled` to true.
+- If you're using MiddleManagers, you also need to set 
`druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled` to true 
in your MiddleManager runtime properties.
+
+You can return to the previous behavior by changing the configs to false.
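+For example, a minimal sketch of the opt-in settings (property names as above; which file each goes in depends on your deployment, per the instructions in this section):
+
+```properties
+# Common runtime properties
+druid.centralizedDatasourceSchema.enabled=true
+
+# MiddleManager runtime properties (only when using MiddleManagers)
+druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled=true
+```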
+
+You can configure the following properties to control how the Coordinator 
service handles unused segment schemas:
+
+|Name|Description|Required|Default|
+|-|-|-|-|
+|`druid.coordinator.kill.segmentSchema.on`| Boolean value for enabling automatic deletion of unused segment schemas. If set to true, the Coordinator service periodically identifies segment schemas that are not referenced by any used segment and marks them as unused. At a later point, these unused schemas are deleted. | No | `true`|
+|`druid.coordinator.kill.segmentSchema.period`| How often to do automatic 
deletion of segment schemas in [ISO 
8601](https://en.wikipedia.org/wiki/ISO_8601) duration format. Value must be 
equal to or greater than 
`druid.coordinator.period.metadataStoreManagementPeriod`. Only applies if 
`druid.coordinator.kill.segmentSchema.on` is set to true.| No| `P1D`|
+|`druid.coordinator.kill.segmentSchema.durationToRetain`| [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) duration for how long a segment schema is retained after it's marked as unused. Only applies if `druid.coordinator.kill.segmentSchema.on` is set to true.| Yes, if `druid.coordinator.kill.segmentSchema.on` is set to true.| `P90D`|
+
+In addition, there are new metrics you can use to monitor centralized datasource schemas:
+
+- `metadatacache/schemaPoll/count`
+- `metadatacache/schemaPoll/failed`
+- `metadatacache/schemaPoll/time`
+- `metadatacache/init/time`
+- `metadatacache/refresh/count`
+- `metadatacache/refresh/time`
+- `metadatacache/backfill/count`
+- `metadatacache/finalizedSegmentMetadata/size`
+- `metadatacache/finalizedSegmentMetadata/count`
+- `metadatacache/finalizedSchemaPayload/count`
+- `metadatacache/temporaryMetadataQueryResults/count`
+- `metadatacache/temporaryPublishedMetadataQueryResults/count`
+
+For more information, see [Metrics](../operations/metrics.md).
+
+[#15817](https://github.com/apache/druid/pull/15817)
+
+### Support for window functions
+
+Added support for using window functions with the MSQ task engine as the query 
engine.
+
+[#15470](https://github.com/apache/druid/pull/15470)
+
+### Experimental extensions
+
+Druid 30.0.0 adds the following extensions.
+
+#### RabbitMQ extension
+
+A new RabbitMQ extension is available as a community contribution.
+The RabbitMQ extension (`druid-rabbit-indexing-service`) lets you manage the 
creation and lifetime of rabbit indexing tasks. These indexing tasks read 
events from [RabbitMQ](https://www.rabbitmq.com) through [super 
streams](https://www.rabbitmq.com/docs/streams#super-streams).
+
+Because super streams allow exactly-once delivery with full support for partitioning, they are compatible with Druid's modern ingestion algorithm and avoid the downsides of the prior RabbitMQ firehose.
+
+Note that the extension uses the RabbitMQ streams feature, not a conventional exchange. Make sure that your messages are in a super stream before consumption. For more information, see the [RabbitMQ documentation](https://www.rabbitmq.com/docs).
+
+[#14137](https://github.com/apache/druid/pull/14137)
+
 ## Functional area and related changes
 
 This section contains detailed release notes separated by areas.
 
 ### Web console
 
+#### Search in tables and columns
+
+You can now use the **Query** view to search in tables and columns.
+
+![Use the sidebar to search in tables and columns in Query 
view](./assets/30.0.0-console-search.png)
+
+[#15990](https://github.com/apache/druid/pull/15990)
+
+#### Improved array ingestion UX
+
+Improved the array ingestion experience in the web console by:
+- Adding `arrayIngestMode` to the Run panel selection, making it more 
prominent.
+- Ensuring that the `arrayIngestMode: array` context parameter is only set 
when the user opts in to arrays.
+- Setting `arrayIngestMode: array` only if Druid detects that the ingestion 
spec includes dimensions specs of type `auto` + `castToType: ARRAY<...>`.
+
+![Run panel shows array ingest mode](./assets/30.0.0-run-panel.png)
+
+[#15927](https://github.com/apache/druid/pull/15927)
+
+#### Kafka input format
+
+Improved how the web console determines the input format for a Kafka source. 
Instead of defaulting to the Kafka input format for a Kafka source, the web 
console now only picks the Kafka input format if it detects any of the 
following in the Kafka sample: a key, headers, or more than one topic.
+
+[#16180](https://github.com/apache/druid/pull/16180)
+
+#### Improved handling of lookups during sampling
+
+Rather than sending a transform expression containing lookups to the sampler, Druid now substitutes the expression with a placeholder. This prevents lookup resolution from blocking the sampling flow.
+
+![Change the transform expression to a 
placeholder](./assets/30.0.0-sampler-lookups.png)
+
+[#16234](https://github.com/apache/druid/pull/16234)
+
 #### Other web console improvements
 
-### Ingestion
+* You can now set `maxCompactionTaskSlots` to zero to stop compaction tasks [#15877](https://github.com/apache/druid/pull/15877)
+* The web console now suggests the `azureStorage` input type instead of the 
deprecated `azure` storage type 
[#15820](https://github.com/apache/druid/pull/15820)
+* The download query detail archive option is now more resilient when the 
detail archive is incomplete 
[#16071](https://github.com/apache/druid/pull/16071)
+* Added the fields **Avro bytes decoder** and **Proto bytes decoder** for 
their input formats [#15950](https://github.com/apache/druid/pull/15950)
+* Added support for exporting results for queries that use the MSQ task engine 
[#15969](https://github.com/apache/druid/pull/15969)
+* Improved the user experience when the web console is operating in manual 
capabilities mode [#16191](https://github.com/apache/druid/pull/16191)
+* Improved the web console to detect doubles better 
[#15998](https://github.com/apache/druid/pull/15998)
+* Improved the query timer as follows:
+  * The timer is no longer shown if an error occurs
+  * The timer resets if you change tabs while a query is running
+  * Fixed the error state being lost when switching tabs twice
+
+  [#16235](https://github.com/apache/druid/pull/16235)
+* Fixed an issue with the 
[Tasks](https://druid.apache.org/docs/latest/operations/web-console#tasks) view 
returning incorrect values for **Created time** and **Duration** fields after 
the Overlord restarts [#16228](https://github.com/apache/druid/pull/16228)
+* Fixed the Azure icon not rendering in the web console 
[#16173](https://github.com/apache/druid/pull/16173)
+* Fixed the supervisor offset reset dialog in the web console 
[#16298](https://github.com/apache/druid/pull/16298)
+
+### General ingestion
+
+#### Improved Azure input source
+
+You can now ingest data from multiple storage accounts using the new 
`azureStorage` input source schema instead of the now deprecated `azure` input 
source schema. For example:
+
+```json
+...
+    "ioConfig": {
+      "type": "index_parallel",
+      "inputSource": {
+        "type": "azureStorage",
+        "objectGlob": "**.json",
+        "uris": ["azureStorage://storageAccount/container/prefix1/file.json", 
"azureStorage://storageAccount/container/prefix2/file2.json"]
+      },
+      "inputFormat": {
+        "type": "json"
+      },
+      ...
+    },
+...
+```
+
+[#15630](https://github.com/apache/druid/pull/15630)
+
+#### Data management API improvement
+
+You can now mark segments as used or unused within the specified interval 
using an optional list of versions. For example: `(interval, [versions])`. When 
`versions` is unspecified, all versions of segments in the `interval` are 
marked as used or unused, preserving the old behavior 
[#16141](https://github.com/apache/druid/pull/16141)
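+As a sketch, a mark-unused payload with the new optional `versions` list might look like the following (the interval and version string are illustrative; see the data management API reference for the exact endpoint):
+
+```json
+{
+  "interval": "2024-01-01/2024-02-01",
+  "versions": ["2024-01-10T00:00:00.000Z"]
+}
+```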
+
+The `segmentIds` filter in the [data management 
API](https://druid.apache.org/docs/latest/api-reference/data-management-api) 
payload is now parameterized in the database query 
[#16174](https://github.com/apache/druid/pull/16174)
+
+#### Nested columns performance improvement
+
+Nested column serialization now releases nested field compression buffers as 
soon as the nested field serialization is completed, which requires 
significantly less direct memory during segment serialization when many nested 
fields are present.
+
+[#16076](https://github.com/apache/druid/pull/16076)
+
+#### Segment allocation
+
+Druid now associates pending segments with the task groups that created them.
+
+Associating pending segments with their task groups lets Druid clean up unneeded pending segments as soon as all tasks in the group exit.
+Deleting these entries promptly after tasks exit can alleviate the load on the metadata store during segment allocation.
+It can also prevent some segment allocation failures caused by conflicting pending segments that are no longer needed.
 
-#### SQL-based ingestion
+The change ensures that an append action upgrades a segment set which 
corresponds exactly to the pending segment upgrades made by the concurrent 
replace action, and eliminates any duplication in query results that may occur.
 
-##### Other SQL-based ingestion improvements
+[#16144](https://github.com/apache/druid/pull/16144)
 
-#### Streaming ingestion
+#### Improved task context reporting
 
-##### Other streaming ingestion improvements
+Added a new type of task report `TaskContextReport` for reporting task context.
+Starting with Druid 30.0.0, all tasks will include this report in the final 
JSON that Druid serves over the task report APIs or writes to files.
+The following is the new report structure for non-MSQ tasks:
+
+```json
+{
+   "ingestionStatsAndErrors": {
+      // existing report content
+   },
+   "taskContext": {
+      "taskId": "xyz",
+      "type": "taskContext",
+      "payload": {
+         "forceTimeChunkLock": true,
+         "useLineageBasedSegmentAllocation": true
+       }
+   }
+}
+```
+
+This change is backwards compatible as it only adds a new field at the 
top-level of the JSON and doesn't modify any existing fields.
+If your code or tests consume task reports, don't rely on the JSON to be a 
singleton map.
+
+[#16041](https://github.com/apache/druid/pull/16041)
+
+#### Improved handling of lock types
+
+Druid can now grant locks of different types (EXCLUSIVE, SHARED, APPEND, REPLACE) for the same interval within a task group, ensuring a transition to a newer set of tasks without failure.
+
+Previously, changing lock types in the supervisor could lead to segment allocation errors due to lock conflicts for the new tasks while older tasks were still running.
+
+[#16369](https://github.com/apache/druid/pull/16369)
+
+#### Other ingestion improvements
+
+* Added indexer-level task metrics to provide more visibility into task distribution [#15991](https://github.com/apache/druid/pull/15991)
+* Added more logging detail for S3 `RetryableS3OutputStream`&mdash;this can 
help to determine whether to adjust chunk size 
[#16117](https://github.com/apache/druid/pull/16117)
+* Added error code to failure type `InternalServerError` 
[#16186](https://github.com/apache/druid/pull/16186)
+* Added a new index for pending segments table for datasource and 
`task_allocator_id` columns [#16355](https://github.com/apache/druid/pull/16355)
+* Changed default value of `useMaxMemoryEstimates` for Hadoop jobs to false 
[#16280](https://github.com/apache/druid/pull/16280)
+* Fixed a bug in the `MarkOvershadowedSegmentsAsUnused` Coordinator duty to 
also consider segments that are overshadowed by a segment that requires zero 
replicas [#16181](https://github.com/apache/druid/pull/16181)
+* Fixed a bug in the `markUsed` and `markUnused` APIs where an empty set of 
segment IDs would be inconsistently treated as null or non-null in different 
scenarios [#16145](https://github.com/apache/druid/pull/16145)
+* Fixed a bug where `numSegmentsKilled` is reported incorrectly 
[#16103](https://github.com/apache/druid/pull/16103)
+* Fixed a bug where completion task reports are not being generated on 
`index_parallel` tasks [#16042](https://github.com/apache/druid/pull/16042)
+* Fixed an issue where concurrent replace skipped intervals locked by append 
locks during compaction [#16316](https://github.com/apache/druid/pull/16316)
+* Improved error messages when supervisor's checkpoint state is invalid 
[#16208](https://github.com/apache/druid/pull/16208)
+* Improved serialization of `TaskReportMap` 
[#16217](https://github.com/apache/druid/pull/16217)
+* Improved compaction segment read and published fields to include sequential 
compaction tasks [#16171](https://github.com/apache/druid/pull/16171)
+* Improved kill task so that it now accepts an optional list of unused segment 
versions to delete [#15994](https://github.com/apache/druid/pull/15994)
+* Improved logging when ingestion tasks try to get lookups from the 
Coordinator at startup [#16287](https://github.com/apache/druid/pull/16287)
+* Improved ingestion performance by parsing an input stream directly instead 
of converting it to a string and parsing the string as JSON 
[#15693](https://github.com/apache/druid/pull/15693)
+* Improved the creation of input row filter predicate in various batch tasks 
[#16196](https://github.com/apache/druid/pull/16196)
+* Improved how Druid fetches tasks from the Overlord to redact credentials 
[#16182](https://github.com/apache/druid/pull/16182)
+* Improved the `markUnused` API endpoint to handle an empty list of segment 
versions [#16198](https://github.com/apache/druid/pull/16198)
+* Improved the `segmentIds` filter in the `markUsed` API payload so that it's 
parameterized in the database query 
[#16174](https://github.com/apache/druid/pull/16174)
+* Optimized `isOvershadowed` when there is a unique minor version for an 
interval [#15952](https://github.com/apache/druid/pull/15952)
+* Removed `EntryExistsException` thrown when trying to insert a duplicate task 
in the metadata store&mdash;Druid now throws a `DruidException` with error code 
`entryAlreadyExists` [#14448](https://github.com/apache/druid/pull/14448)
+* The task status output for a failed task now includes the exception message 
[#16286](https://github.com/apache/druid/pull/16286)
+
+### SQL-based ingestion
+
+#### Manifest files for MSQ task engine exports
+
+Export queries that use the MSQ task engine now also create a manifest file at 
the destination, which lists the files created by the query.
+
+During a rolling update, older versions of workers don't return a list of exported files, and older controllers don't create a manifest file. Therefore, export queries run during this time might have incomplete manifests.
+
+[#15953](https://github.com/apache/druid/pull/15953)
+
+#### MSQ task report improvements
+
+Improved the task report for the MSQ task engine as follows:
+
+* A new field in the MSQ task report captures the milliseconds elapsed between 
when the worker task was first requested and when it fully started running. 
Actual work time can be calculated using `actualWorkTimeMS = durationMs - 
pendingMs` [#15966](https://github.com/apache/druid/pull/15966)
+* A new field `segmentReport` logs the type of the segment created and the 
reason behind the selection [#16175](https://github.com/apache/druid/pull/16175)
+
+#### Other SQL-based ingestion improvements
+
+* Added a new context parameter `storeCompactionState`. When set to `true`, 
Druid records the state of compaction for each segment in the 
`lastCompactionState` segment field 
[#15965](https://github.com/apache/druid/pull/15965)
+* Added `SortMerge` join support for `IS NOT DISTINCT FROM` operations 
[#16003](https://github.com/apache/druid/pull/16003)
+* Added support for selective loading of lookups so that MSQ task engine 
workers don't load unnecessary lookups 
[#16328](https://github.com/apache/druid/pull/16328)
+* Changed the controller checker for the MSQ task engine to check for closed 
only [#16161](https://github.com/apache/druid/pull/16161)
+* Fixed an incorrect check while generating MSQ task engine error report 
[#16273](https://github.com/apache/druid/pull/16273)
+* Improved the message you get when the MSQ task engine falls back to a 
broadcast join from a sort-merge 
[#16002](https://github.com/apache/druid/pull/16002)
+* Improved the speed of worker cancellation by bypassing unnecessary 
communication with the controller 
[#16158](https://github.com/apache/druid/pull/16158)
+* Runtime exceptions generated while writing frames now include the name of 
the column where they occurred 
[#16130](https://github.com/apache/druid/pull/16130)
+
+### Streaming ingestion
+
+#### Streaming completion reports
+
+Streaming task completion reports now include an extra field, `recordsProcessed`, which lists the partitions processed by the task and a count of records for each partition. Use this field to see the actual throughput of tasks and decide whether to scale your workers vertically or horizontally.
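+For example, the new field might look like the following in a completion report (partition names and counts are illustrative; the exact shape depends on your stream type):
+
+```json
+"recordsProcessed": {
+  "partition-0": 5134,
+  "partition-1": 4972
+}
+```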
+
+[#15930](https://github.com/apache/druid/pull/15930)
+
+#### Improved memory management for Kinesis 
+
+Kinesis ingestion memory tuning config is now simpler:
+
+* You no longer need to set the configs `recordsPerFetch` and `deaggregate`
+* `fetchThreads` can no longer exceed the budgeted amount of heap (100 MB or 
5%)
+* Use `recordBufferSizeBytes` to set a byte-based limit rather than 
records-based limit for the Kinesis fetch threads and main ingestion threads. 
We recommend setting this to 100 MB or 10% of heap, whichever is smaller.
+* Use `maxBytesPerPoll` to set a byte-based limit for how much data Druid 
polls from shared buffer at a time. Default is 1,000,000 bytes.
+
+As part of this change, the following properties have been deprecated:
+- `recordBufferSize`,  use `recordBufferSizeBytes` instead
+- `maxRecordsPerPoll`, use `maxBytesPerPoll` instead
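+A sketch of a Kinesis supervisor `tuningConfig` using the new byte-based properties (values follow the 100 MB recommendation and the 1,000,000-byte default above):
+
+```json
+"tuningConfig": {
+  "type": "kinesis",
+  "recordBufferSizeBytes": 100000000,
+  "maxBytesPerPoll": 1000000
+}
+```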
+
+[#15360](https://github.com/apache/druid/pull/15360)
+
+#### Improved autoscaling for Kinesis streams
+
+The Kinesis autoscaler now considers max lag in minutes instead of total lag. 
To maintain backwards compatibility, this change is opt-in for existing Kinesis 
connections. To opt in, set `lagBased.lagAggregate` in your supervisor spec to 
`MAX`. New connections use max lag by default.
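+As a sketch, opting in for an existing connection might look like the following fragment of a supervisor spec (surrounding autoscaler fields omitted; check your spec for the exact nesting of `lagBased.lagAggregate`):
+
+```json
+"autoScalerConfig": {
+  "autoScalerStrategy": "lagBased",
+  "lagAggregate": "MAX"
+}
+```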
+
+[#16284](https://github.com/apache/druid/pull/16284)
+[#16314](https://github.com/apache/druid/pull/16314)
+
+#### Parallelized incremental segment creation
+
+You can now configure the number of threads used to create and persist incremental segments on disk using the `numPersistThreads` property. Additional threads parallelize segment creation and can prevent ingestion from stalling or pausing frequently, as long as sufficient CPU resources are available.
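+For example, a minimal sketch of a streaming `tuningConfig` with the new property (the thread count is illustrative):
+
+```json
+"tuningConfig": {
+  "type": "kafka",
+  "numPersistThreads": 2
+}
+```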
+
+[#13982](https://github.com/apache/druid/pull/13982/files)
+
+#### Improved Supervisor rolling restarts
+
+The `stopTaskCount` config now prioritizes stopping older tasks first. As part 
of this change, you must also explicitly set a value for `stopTaskCount`. It no 
longer defaults to the same value as `taskCount`.
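+As a sketch, a supervisor `ioConfig` that stops two of four tasks at a time during a rolling restart (field placement per your supervisor spec):
+
+```json
+"ioConfig": {
+  "taskCount": 4,
+  "stopTaskCount": 2
+}
+```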
+
+[#15859](https://github.com/apache/druid/pull/15859)
+
+#### Other streaming ingestion improvements
+
+* Improved concurrent replace to work with supervisors using concurrent locks 
[#15995](https://github.com/apache/druid/pull/15995)
+* Fixed an issue where updating a Kafka streaming supervisor's topic from single-topic to multi-topic (pattern), or vice versa, could cause old offsets to be spuriously ignored [#16190](https://github.com/apache/druid/pull/16190)
 
 ### Querying
 
+#### Dynamic table append
+
+You can now use the `TABLE(APPEND(...))` function to implicitly create unions based on table schemas. For example, the following two queries are equivalent:
+
+```sql
+TABLE(APPEND('table1','table2','table3'))
+```
+
+and
+
+```sql
+SELECT column1,NULL AS column2,NULL AS column3 FROM table1
+UNION ALL
+SELECT NULL AS column1,column2,NULL AS column3 FROM table2
+UNION ALL
+SELECT column1,column2,column3 FROM table3
+```
+
+Note that if the same columns are defined with different input types, Druid 
uses the least restrictive column type.
+
+[#15897](https://github.com/apache/druid/pull/15897)
+
+#### Added SCALAR_IN_ARRAY function
+
+Added `SCALAR_IN_ARRAY` function for checking if a scalar expression appears 
in an array:
+
+`SCALAR_IN_ARRAY(expr, arr)`
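+For example (table and column names are hypothetical):
+
+```sql
+SELECT *
+FROM "wikipedia"
+WHERE SCALAR_IN_ARRAY(countryName, ARRAY['United States', 'Canada'])
+```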
+
+[#16306](https://github.com/apache/druid/pull/16306)
+
+#### Improved performance for AND filters
+
+Druid query processing now adaptively determines when the children of AND filters should compute indexes and when they should simply match rows during the scan, based on the selectivity of other filters. This technique, known as filter partitioning, can result in dramatic performance increases, depending on the order of filters in the query.
+
+For example, take a query like `SELECT SUM(longColumn) FROM druid.table WHERE stringColumn1 = '1000' AND stringColumn2 LIKE '%1%'`. Previously, Druid used indexes when processing filters if they were available. That's not always ideal: imagine that `stringColumn1 = '1000'` matches only 100 rows. With indexes, Druid has to find every value of `stringColumn2` for which `LIKE '%1%'` is true in order to compute the indexes for the filter. If `stringColumn2` has more than 100 distinct values, this ends up being more work than simply checking the 100 remaining rows for a match.
+
+With the new logic, Druid now checks the selectivity of indexes as it 
processes each clause of the AND filter. If it determines it would take more 
work to compute the index than to match the remaining rows, Druid skips 
computing the index.
+
+The order in which you write filters in a WHERE clause can affect the performance of your query. More improvements are coming, but you can take advantage of the existing ones by reordering filters. Put filters whose indexes are cheap to compute, such as IS NULL, =, and comparisons (`>`, `>=`, `<`, and `<=`), near the start of AND filters so that Druid processes your queries more efficiently. Leaving filters in another order won't degrade performance relative to previous releases, since the fallback behavior is what Druid did previously.
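+Applying that guidance to the earlier example, the cheap equality filter leads:
+
+```sql
+-- The inexpensive '=' filter comes first, so Druid can skip
+-- computing the costly LIKE index when few rows remain
+SELECT SUM(longColumn) FROM druid.table
+WHERE stringColumn1 = '1000'
+  AND stringColumn2 LIKE '%1%'
+```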
+
+[#15838](https://github.com/apache/druid/pull/15838)
+
+#### Improved PARTITIONED BY
+
+If you use the MSQ task engine to run queries, you can now use the following 
strings in addition to the supported ISO 8601 periods:
+
+- `HOUR` - Same as `'PT1H'`
+- `DAY` - Same as `'P1D'`
+- `MONTH` - Same as `'P1M'`
+- `YEAR` - Same as `'P1Y'`
+- `ALL TIME`
+- `ALL` - Alias for `ALL TIME`
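+For example, the following sketch partitions by day using the new shorthand (table and column names are hypothetical; `PARTITIONED BY DAY` is equivalent to `PARTITIONED BY 'P1D'`):
+
+```sql
+REPLACE INTO "events_by_day" OVERWRITE ALL
+SELECT __time, channel FROM "events"
+PARTITIONED BY DAY
+```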
+
+[#15836](https://github.com/apache/druid/pull/15836/)
+
+#### Improved filter bundles
+
+Improved filter bundles as follows:
+
+* Renamed the parameter `selectionRowCount` on `makeFilterBundle` to 
`applyRowCount`, and redefined as an upper bound on rows remaining after 
short-circuiting (rather than number of rows selected so far).
+This definition works better for OR filters, which pass through the
+FALSE set rather than the TRUE set to the next subfilter.
+* `AndFilter` uses `min(applyRowCount, indexIntersectionSize)` rather than 
using `selectionRowCount` for the first subfilter and `indexIntersectionSize` 
for each filter thereafter. This improves accuracy when the incoming 
`applyRowCount` is smaller than the row count from the first few indexes.
+* `OrFilter` uses `min(applyRowCount, totalRowCount - indexUnionSize)` rather 
than `applyRowCount` for subfilters. This allows an OR filter to pass
+information about short-circuiting to its subfilters.
+
+[#16292](https://github.com/apache/druid/pull/16292)
+
+#### Improved catalog tables
+
+You can validate complex target column types against source input expressions 
during DML INSERT/REPLACE operations.
+
+[#16223](https://github.com/apache/druid/pull/16223)
+
+You can now define catalog tables without explicit segment granularities. DML 
queries on such tables need to have the PARTITIONED BY clause specified. 
Alternatively, you can update the table to include a defined segment 
granularity for DML queries to be validated properly.
+
+[#16278](https://github.com/apache/druid/pull/16278)
+
+#### Double and null values in SQL type ARRAY
+
+You can now pass double and null values in SQL type ARRAY through dynamic 
parameters.
+
+For example:
+
+```json
+"parameters":[
+  {
+    "type":"ARRAY",
+    "value":[1.23, 4.56, null]
+  }
+]
+```
+
+[#16274](https://github.com/apache/druid/pull/16274)
+
+#### Improved native queries
+
+Native queries can now group on nested columns and arrays.
+
+[#16068](https://github.com/apache/druid/pull/16068)
+
 #### Other querying improvements
 
+* `typedIn` filter can now run in replace-with-default mode 
[#16233](https://github.com/apache/druid/pull/16233)
+* Added support for numeric arrays to window functions and subquery 
materializations [#15917](https://github.com/apache/druid/pull/15917)
+* Added support for single value aggregated Group By queries for scalars 
[#15700](https://github.com/apache/druid/pull/15700)
+* Added support for column reordering with scan and sort style queries 
[#15815](https://github.com/apache/druid/pull/15815)
+* Added support for joins in decoupled mode 
[#15957](https://github.com/apache/druid/pull/15957)

Review Comment:
   Removed the release note for PR 15957



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

