2bethere commented on code in PR #14600:
URL: https://github.com/apache/druid/pull/14600#discussion_r1281389920


##########
docs/do-not-merge.md:
##########
@@ -0,0 +1,464 @@
+<!--Intentionally, there's no Apache license so that the GHA fails it. This 
file is not meant to be merged.
+
+- https://github.com/apache/druid/pull/14266 - we removed input source 
security from 26 (https://github.com/apache/druid/pull/14003). Should we not 
include this in 27 release notes?
+
+-->
+
+# Still need
+
+search for TBD 
+
+* Caveats for array column types
+* More info about Hadoop 2 deprecation
+* Info for parameter execution highlight
+* Caching cost for upgrade notes - is it this? 
[#14484](https://github.com/apache/druid/pull/14484)
+* User impact of https://github.com/apache/druid/pull/14048 for highlights
+* https://github.com/apache/druid/pull/14476 seems to only be documented here 
in release notes and the PR?
+* Vad's input for web console section
+* PR for Temporary storage as a runtime parameter. MSQ will start honoring 
changes to the Middle Manager.
+* What to do about input source security. There are no docs for it.
+* New query filters 
+* Confirm the correct naming in https://github.com/apache/druid/pull/14359 
(OSHI sys mon change). The changed files say one thing and the PR description 
says another
+* Community extensions. What do we want to say about iceberg and the other one 
(what was the other one?)
+
+Apache Druid 27.0.0 contains over $NUMBER_FEATURES new features, bug fixes, 
performance enhancements, documentation improvements, and additional test 
coverage from $NUMBER_OF_CONTRIBUTORS contributors.
+
+[See the complete set of changes for additional 
details](https://github.com/apache/druid/issues?q=is%3Aclosed+milestone%3A27.0+sort%3Aupdated-desc+),
 including [bug 
fixes](https://github.com/apache/druid/issues?q=is%3Aclosed+milestone%3A27.0+sort%3Aupdated-desc+label%3ABug).
+
+Review the upgrade notes and incompatible changes before you upgrade to Druid 
27.0.0.
+
+# Highlights
+
+<!-- HIGHLIGHTS H2. FOR EACH MAJOR FEATURE FOR THE RELEASE -->
+
+## Query from deep storage (experimental)
+
+Druid now supports querying segments that are stored only in deep storage. When you query from deep storage, you can make more data available for queries without necessarily having to scale your Historical processes to accommodate it. To take advantage of the potential storage savings, make sure you configure your load rules to not load all your segments onto Historical processes.
+
+Note that at least one segment of a datasource must be loaded onto a 
Historical process so that the Broker can plan the query. It can be any segment 
though.
+
+For more information, see the following:
+
+- [Query from deep 
storage](https://druid.apache.org/docs/latest/querying/query-deep-storage.html)
+- [Query from deep storage 
tutorial](https://druid.apache.org/docs/latest/tutorials/tutorial-query-deep-storage.html)
+
+[#14416](https://github.com/apache/druid/pull/14416) 
[#14512](https://github.com/apache/druid/pull/14512) 
[#14527](https://github.com/apache/druid/pull/14527)
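+
+As a hedged sketch of what a deep storage query might look like (the `wikipedia` datasource is hypothetical; see the linked documentation for the exact request shape), you submit the query through the asynchronous SQL statements API by POSTing a body like this to `/druid/v2/sql/statements`:
+
+```json
+{
+  "query": "SELECT channel, COUNT(*) AS cnt FROM wikipedia GROUP BY channel",
+  "context": {
+    "executionMode": "ASYNC"
+  }
+}
+```
+
+You then poll the returned statement ID for status and fetch results when the query completes, per the linked documentation.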
+
+## Schema auto-discovery and array column types
+
+Type-aware schema auto-discovery is now generally available. Druid can 
determine the schema for the data you ingest rather than you having to manually 
define the schema.
+
+As part of the type-aware schema discovery improvements, array column types 
are now generally available. Druid can determine the column types for your 
schema and assign them to these array column types when you ingest data using 
type-aware schema auto-discovery with the `auto` column type.
+
+Keep the following in mind when using these features:
+
+- TBD
+
+For more information about this feature, see the following:
+
+- [Type-aware schema discovery](https://druid.apache.org/docs/latest/ingestion/schema-design.html#type-aware-schema-discovery)
+- [26.0.0 release notes for schema auto-discovery](https://github.com/apache/druid/releases#26.0.0-highlights-auto-type-column-schema-%28experimental%29-schema-auto-discovery-%28experimental%29)
+- [26.0.0 release notes for array column types](https://github.com/apache/druid/releases#26.0.0-highlights-auto-type-column-schema-%28experimental%29)
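+
+As a minimal sketch (the surrounding ingestion spec is omitted), you opt in by setting `useSchemaDiscovery` in the `dimensionsSpec` of your ingestion spec:
+
+```json
+"dimensionsSpec": {
+  "useSchemaDiscovery": true
+}
+```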
+
+## Smart segment loading
+
+The Coordinator is now more stable and user-friendly and has the following 
behavior updates:
+
+- New segments that are underreplicated are now prioritized when you use the 
new `smartSegmentLoading` mode, which is enabled by default. Previously, it 
prioritized segments equally.
+- Items in segment load queues can be prioritized or canceled, improving 
reaction time for changes in the cluster and segment assignment decisions.
+- Leadership changes have less impact now, and the Coordinator doesn't get 
stuck even if re-election happens while a Coordinator run is in progress.
+
+Lastly, the `cost` balancer strategy performs much better now and is capable 
of moving more segments in a single Coordinator run. These improvements were 
made by borrowing ideas from the `cachingCost` strategy.
+
+For more information, see the following:
+
+- [Upgrade note for config changes related to smart segment 
loading](#segment-loading-config-changes)
+- [New coordinator metrics](#new-coordinator-metrics)
+- [Smart segment loading 
documentation](https://druid.apache.org/docs/latest/configuration/index.html#smart-segment-loading)
+
+[#13197](https://github.com/apache/druid/pull/13197) 
[#14385](https://github.com/apache/druid/pull/14385) 
[#14484](https://github.com/apache/druid/pull/14484)
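+
+Smart segment loading is part of the Coordinator dynamic configuration. As a sketch (assuming the Coordinator dynamic configuration API at `/druid/coordinator/v1/config`), since the mode is enabled by default, you could post the following to turn it off and fall back to manually tuned segment-loading behavior:
+
+```json
+{
+  "smartSegmentLoading": false
+}
+```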
+
+## Guardrail for subquery results
+
+Users can now add a guardrail to prevent a subquery's results from exceeding a set number of bytes by setting `druid.server.http.maxSubqueryBytes` in the Broker's config or `maxSubqueryBytes` in the query context. This byte-based limit is recommended over the existing row-based limit.
+
+This feature is experimental. Druid falls back to row-based limiting if it cannot accurately determine the size of the results consumed by the query.
+
+[#13952](https://github.com/apache/druid/pull/13952)
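+
+For example, a hedged sketch of setting the byte-based limit through the query context (assuming the context parameter is named `maxSubqueryBytes`; the limit value and the `wikipedia` datasource are arbitrary):
+
+```json
+{
+  "query": "SELECT COUNT(*) FROM wikipedia",
+  "context": {
+    "maxSubqueryBytes": 100000000
+  }
+}
+```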
+
+## New OSHI system monitor
+
+Added a new OSHI system monitor (`OshiSysMonitor`) to replace `SysMonitor`. The new monitor has wider support for different machine architectures, including ARM instances. We recommend switching to the new monitor. `SysMonitor` is now deprecated and will be removed in a future release.
+
+[#14359](https://github.com/apache/druid/pull/14359)
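+
+To switch, replace `SysMonitor` with `OshiSysMonitor` in your monitors list (a sketch; the fully qualified class name shown here is an assumption based on the package `SysMonitor` lives in):
+
+```properties
+# runtime.properties: swap the deprecated SysMonitor for OshiSysMonitor
+druid.monitoring.monitors=["org.apache.druid.java.util.metrics.OshiSysMonitor"]
+```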
+
+
+## Java 17 support
+
+Druid now fully supports Java 17.
+
+[#14384](https://github.com/apache/druid/pull/14384)
+
+## Hadoop 2 deprecated
+
+Support for Hadoop 2 is now deprecated. It will be removed in a future release.

Review Comment:
   ```suggestion
   Many of the important dependent libraries that Druid uses no longer support Hadoop 2. To stay current and keep pathways to mitigate security vulnerabilities, the community has decided to deprecate support for Hadoop 2.x releases starting with this release. Starting with Druid 28.x, Hadoop 3.x is the only supported Hadoop version.
   Please consider migrating to MSQ-based ingestion or native ingestion if you are using Hadoop 2.x for ingestion today. If migrating to Druid ingestion is not an option, plan to upgrade your Hadoop infrastructure before upgrading to the next Druid release.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

