bvaradar commented on a change in pull request #1683:
URL: https://github.com/apache/hudi/pull/1683#discussion_r441120720
##########
File path: docs/_pages/releases.md
##########

```diff
@@ -3,8 +3,30 @@
 title: "Releases"
 permalink: /releases
 layout: releases
 toc: true
-last_modified_at: 2019-12-30T15:59:57-04:00
+last_modified_at: 2020-05-28T08:40:00-07:00
 ---
+## [Release 0.5.3](https://github.com/apache/hudi/releases/tag/release-0.5.3) ([docs](/docs/0.5.3-quick-start-guide.html))
+
+### Download Information
+ * Source Release : [Apache Hudi 0.5.3 Source Release](https://downloads.apache.org/hudi/0.5.3/hudi-0.5.3.src.tgz) ([asc](https://downloads.apache.org/hudi/0.5.3/hudi-0.5.3.src.tgz.asc), [sha512](https://downloads.apache.org/hudi/0.5.3/hudi-0.5.3.src.tgz.sha512))
+ * Apache Hudi jars corresponding to this release are available [here](https://repository.apache.org/#nexus-search;quick~hudi)
+
+### Migration Guide for this release
+ * This is a bug-fix-only release, and no special migration steps are needed when upgrading from 0.5.2. If you are upgrading from an earlier release "X", please make sure you read the migration guide for each subsequent release between "X" and 0.5.3.
+
+### Release Highlights
+ * Hudi now supports the `aliyun OSS` storage service.
+ * The Embedded Timeline Server is enabled by default for both delta-streamer and Spark datasource writes. This feature was in experimental mode before this release. The Embedded Timeline Server caches file listing calls in the Spark driver and serves them to Spark writer tasks, reducing the number of file listings that need to be performed for each write.
+ * Incremental Cleaning is enabled by default for both delta-streamer and Spark datasource writes. This feature was also in experimental mode before this release. In the steady state, incremental cleaning avoids the costly step of scanning all partitions and instead uses Hudi metadata to find the files to be cleaned up.
+ * Delta-streamer config files can now be placed in a different filesystem than the actual data.
+ * Hudi Hive Sync now supports tables partitioned by date-type columns.
```
```diff
+ * Hudi Hive Sync now supports syncing directly via the Hive MetaStore.
```

Review comment:

We need to add the command for how to invoke this. Let me find it and post it here.

##########
File path: docs/_pages/releases.md
##########

```diff
+### Migration Guide for this release
```

Review comment:

Maybe we can add this: 0.5.3 is the first Hudi release after graduation. As a result, all Hudi jars will no longer have "-incubating" in the version name. In all the places where the Hudi version is referred to, please make sure "-incubating" is no longer present. For example, the hudi-spark-bundle pom dependency would look like:

```xml
<dependency>
    <groupId>org.apache.hudi</groupId>
    <artifactId>hudi-spark-bundle_2.12</artifactId>
    <version>0.5.3</version>
</dependency>
```
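As a placeholder for the command the reviewer mentions above, here is a hedged sketch of what a standalone Hive sync invocation might look like, using the `org.apache.hudi.hive.HiveSyncTool` class. The jar path, the table/partition names, and the `--use-jdbc false` flag (to bypass the JDBC path in favor of the MetaStore client) are assumptions for illustration only and are not confirmed in this thread; check the 0.5.3 docs for the actual command.

```shell
# Hypothetical sketch only: run the standalone Hive sync tool for a Hudi table,
# asking it to sync via the Hive MetaStore client rather than over JDBC.
# Jar path, flag names, and values below are illustrative assumptions.
java -cp /path/to/hudi-hive-bundle.jar:$HIVE_HOME/lib/* \
  org.apache.hudi.hive.HiveSyncTool \
  --base-path hdfs:///tmp/hudi/my_hudi_table \
  --database default \
  --table my_hudi_table \
  --partitioned-by datestr \
  --use-jdbc false
```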
