This is an automated email from the ASF dual-hosted git repository.

leesf pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 61fc206  [MINOR] Add padding to code area in release page (#1296)
61fc206 is described below

commit 61fc206cddaab63efc8e04355f7dfaf88cca4724
Author: lamber-ken <lamber...@163.com>
AuthorDate: Sun Feb 2 21:51:25 2020 +0800

    [MINOR] Add padding to code area in release page (#1296)
---
 docs/_docs/1_1_quick_start_guide.md   |  3 +-
 docs/_pages/releases.md               | 54 +++++++++++++++++------------------
 docs/_sass/hudi_style/_variables.scss |  2 +-
 3 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/docs/_docs/1_1_quick_start_guide.md b/docs/_docs/1_1_quick_start_guide.md
index 4a9c1b3..256e560 100644
--- a/docs/_docs/1_1_quick_start_guide.md
+++ b/docs/_docs/1_1_quick_start_guide.md
@@ -16,7 +16,8 @@ Hudi works with Spark-2.x versions. You can follow instructions [here](https://s
 From the extracted directory run spark-shell with Hudi as:
 
 ```scala
-spark-2.4.4-bin-hadoop2.7/bin/spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
+spark-2.4.4-bin-hadoop2.7/bin/spark-shell \
+    --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
 ```
 
diff --git a/docs/_pages/releases.md b/docs/_pages/releases.md
index 8797e84..a27f555 100644
--- a/docs/_pages/releases.md
+++ b/docs/_pages/releases.md
@@ -13,36 +13,36 @@ last_modified_at: 2019-12-30T15:59:57-04:00
 * Apache Hudi (incubating) jars corresponding to this release are available [here](https://repository.apache.org/#nexus-search;quick~hudi)
 
 ### Release Highlights
-* Dependency Version Upgrades
-    * Upgrade from Spark 2.1.0 to Spark 2.4.4
-    * Upgrade from Avro 1.7.7 to Avro 1.8.2
-    * Upgrade from Parquet 1.8.1 to Parquet 1.10.1
-    * Upgrade from Kafka 0.8.2.1 to Kafka 2.0.0 as a result of updating spark-streaming-kafka artifact from 0.8_2.11/2.12 to 0.10_2.11/2.12.
-* **IMPORTANT** This version requires your runtime spark version to be upgraded to 2.4+.
-* Hudi now supports both Scala 2.11 and Scala 2.12, please refer to [Build with Scala 2.12](https://github.com/apache/incubator-hudi#build-with-scala-212) to build with Scala 2.12.
-Also, the packages hudi-spark, hudi-utilities, hudi-spark-bundle and hudi-utilities-bundle are changed correspondingly to hudi-spark_{scala_version}, hudi-spark_{scala_version}, hudi-utilities_{scala_version}, hudi-spark-bundle_{scala_version} and hudi-utilities-bundle_{scala_version}.
-Note that scala_version here is one of (2.11, 2.12).
-* With 0.5.1, we added functionality to stop using renames for Hudi timeline metadata operations. This feature is automatically enabled for newly created Hudi tables. For existing tables, this feature is turned off by default. Please read this [section](https://hudi.apache.org/docs/deployment.html#upgrading), before enabling this feature for existing hudi tables.
-To enable the new hudi timeline layout which avoids renames, use the write config "hoodie.timeline.layout.version=1". Alternatively, you can use "repair overwrite-hoodie-props" to append the line "hoodie.timeline.layout.version=1" to hoodie.properties. Note that in any case, upgrade hudi readers (query engines) first with 0.5.1-incubating release before upgrading writer.
-* CLI supports `repair overwrite-hoodie-props` to overwrite the table's hoodie.properties with specified file, for one-time updates to table name or even enabling the new timeline layout above. Note that few queries may temporarily fail while the overwrite happens (few milliseconds).
-* DeltaStreamer CLI parameter for capturing table type is changed from --storage-type to --table-type. Refer to [wiki](https://cwiki.apache.org/confluence/display/HUDI/Design+And+Architecture) with more latest terminologies.
-* Configuration Value change for Kafka Reset Offset Strategies. Enum values are changed from LARGEST to LATEST, SMALLEST to EARLIEST for configuring Kafka reset offset strategies with configuration(auto.offset.reset) in deltastreamer.
-* When using spark-shell to give a quick peek at Hudi, please provide `--packages org.apache.spark:spark-avro_2.11:2.4.4`, more details would refer to [latest quickstart docs](https://hudi.apache.org/docs/quick-start-guide.html)
-* Key generator moved to separate package under org.apache.hudi.keygen. If you are using overridden key generator classes (configuration ("hoodie.datasource.write.keygenerator.class")) that comes with hudi package, please ensure the fully qualified class name is changed accordingly.
-* Hive Sync tool will register RO tables for MOR with a _ro suffix, so query with _ro suffix. You would use `--skip-ro-suffix` in sync config in sync config to retain the old naming without the _ro suffix.
-* With 0.5.1, hudi-hadoop-mr-bundle which is used by query engines such as presto and hive includes shaded avro package to support hudi real time queries through these engines. Hudi supports pluggable logic for merging of records. Users provide their own implementation of [HoodieRecordPayload](https://github.com/apache/incubator-hudi/blob/master/hudi-common/src/main/java/org/apache/hudi/common/model/HoodieRecordPayload.java).
-If you are using this feature, you need to relocate the avro dependencies in your custom record payload class to be consistent with internal hudi shading. You need to add the following relocation when shading the package containing the record payload implementation.
-
- ```xml
-<relocation>
-    <pattern>org.apache.avro.</pattern>
-    <shadedPattern>org.apache.hudi.org.apache.avro.</shadedPattern>
-</relocation>
- ```
+ * Dependency Version Upgrades
+   - Upgrade from Spark 2.1.0 to Spark 2.4.4
+   - Upgrade from Avro 1.7.7 to Avro 1.8.2
+   - Upgrade from Parquet 1.8.1 to Parquet 1.10.1
+   - Upgrade from Kafka 0.8.2.1 to Kafka 2.0.0 as a result of updating spark-streaming-kafka artifact from 0.8_2.11/2.12 to 0.10_2.11/2.12.
+ * **IMPORTANT** This version requires your runtime spark version to be upgraded to 2.4+.
+ * Hudi now supports both Scala 2.11 and Scala 2.12; please refer to [Build with Scala 2.12](https://github.com/apache/incubator-hudi#build-with-scala-212) to build with Scala 2.12.
+   Also, the packages hudi-spark, hudi-utilities, hudi-spark-bundle and hudi-utilities-bundle are renamed correspondingly to hudi-spark_{scala_version}, hudi-utilities_{scala_version}, hudi-spark-bundle_{scala_version} and hudi-utilities-bundle_{scala_version}.
+   Note that scala_version here is one of (2.11, 2.12).
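
For readers consuming these artifacts from a build tool, a minimal sbt sketch of the renamed coordinates (the choice of sbt and the 2.12 suffix are illustrative; swap in 2.11 as needed):

```scala
// build.sbt (sketch): Hudi artifacts now carry an explicit Scala-version suffix.
libraryDependencies ++= Seq(
  // Scala 2.12 build of the Hudi Spark bundle from this release
  "org.apache.hudi" % "hudi-spark-bundle_2.12" % "0.5.1-incubating",
  // spark-avro must use the matching Scala suffix
  "org.apache.spark" % "spark-avro_2.12" % "2.4.4"
)
```
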
+ * With 0.5.1, we added functionality to stop using renames for Hudi timeline metadata operations. This feature is automatically enabled for newly created Hudi tables; for existing tables it is turned off by default. Please read this [section](https://hudi.apache.org/docs/deployment.html#upgrading) before enabling this feature for existing Hudi tables.
+   To enable the new Hudi timeline layout, which avoids renames, use the write config `hoodie.timeline.layout.version=1`. Alternatively, you can use `repair overwrite-hoodie-props` to append the line `hoodie.timeline.layout.version=1` to hoodie.properties. In either case, upgrade Hudi readers (query engines) to the 0.5.1-incubating release first, before upgrading the writer.
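
A minimal sketch of the write-config route for the item above (`df`, `tableName` and `basePath` are placeholders, not from the release notes):

```scala
// Sketch: enable the rename-free timeline layout on the next write to a table.
// Upgrade readers to 0.5.1-incubating before enabling this.
df.write.format("org.apache.hudi").
  option("hoodie.table.name", tableName).
  option("hoodie.timeline.layout.version", "1").
  mode(org.apache.spark.sql.SaveMode.Append).
  save(basePath)
```
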
+ * CLI supports `repair overwrite-hoodie-props` to overwrite the table's hoodie.properties with a specified file, for one-time updates to the table name or even enabling the new timeline layout above. Note that a few queries may temporarily fail while the overwrite happens (a few milliseconds).
+ * The DeltaStreamer CLI parameter for capturing the table type is changed from `--storage-type` to `--table-type`. Refer to the [wiki](https://cwiki.apache.org/confluence/display/HUDI/Design+And+Architecture) for the latest terminology.
+ * Configuration value change for Kafka reset offset strategies: enum values are changed from LARGEST to LATEST and from SMALLEST to EARLIEST for the configuration `auto.offset.reset` in DeltaStreamer.
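
A sketch of the renamed values from the item above, expressed in Scala for consistency (building the properties programmatically is illustrative; in practice these keys usually live in the properties file passed to DeltaStreamer):

```scala
// Sketch: post-0.5.1 enum values for the Kafka reset strategy.
val props = new java.util.Properties()
props.setProperty("auto.offset.reset", "EARLIEST") // was "SMALLEST"; "LATEST" was "LARGEST"
```
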
+ * When using spark-shell to take a quick peek at Hudi, please provide `--packages org.apache.spark:spark-avro_2.11:2.4.4`; for more details, refer to the [latest quickstart docs](https://hudi.apache.org/docs/quick-start-guide.html)
+ * Key generators moved to a separate package under org.apache.hudi.keygen. If you are using overridden key generator classes (configuration `hoodie.datasource.write.keygenerator.class`) that come with the hudi package, please ensure the fully qualified class name is changed accordingly.
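
A sketch of the updated configuration, assuming the stock SimpleKeyGenerator (if you ship a custom generator, substitute its new fully qualified name; `df` and `basePath` are placeholders):

```scala
// Sketch: the key generator classes now live under org.apache.hudi.keygen.
df.write.format("org.apache.hudi").
  option("hoodie.datasource.write.keygenerator.class",
    "org.apache.hudi.keygen.SimpleKeyGenerator").
  mode(org.apache.spark.sql.SaveMode.Append).
  save(basePath)
```
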
+ * The Hive Sync tool will register RO tables for MOR with a _ro suffix, so query with the _ro suffix. Use `--skip-ro-suffix` in the sync config to retain the old naming without the _ro suffix.
+ * With 0.5.1, hudi-hadoop-mr-bundle, which is used by query engines such as Presto and Hive, includes a shaded avro package to support Hudi real-time queries through these engines. Hudi supports pluggable logic for merging of records; users provide their own implementation of [HoodieRecordPayload](https://github.com/apache/incubator-hudi/blob/master/hudi-common/src/main/java/org/apache/hudi/common/model/HoodieRecordPayload.java).
+   If you are using this feature, you need to relocate the avro dependencies in your custom record payload class to be consistent with internal Hudi shading, by adding the following relocation when shading the package containing the record payload implementation.
+
+   ```xml
+   <relocation>
+     <pattern>org.apache.avro.</pattern>
+     <shadedPattern>org.apache.hudi.org.apache.avro.</shadedPattern>
+   </relocation>
+   ```
 
 * Better delete support in DeltaStreamer; please refer to the [blog](https://cwiki.apache.org/confluence/display/HUDI/2020/01/15/Delete+support+in+Hudi) for more info.
 * Support for AWS Database Migration Service (DMS) in DeltaStreamer; please refer to the [blog](https://cwiki.apache.org/confluence/display/HUDI/2020/01/20/Change+Capture+Using+AWS+Database+Migration+Service+and+Hudi) for more info.
- * Support for DynamicBloomFilter. This is turned off by default, to enable the DynamicBloomFilter, please use the index config "hoodie.bloom.index.filter.type=DYNAMIC_V0".
+ * Support for DynamicBloomFilter. This is turned off by default; to enable it, please use the index config `hoodie.bloom.index.filter.type=DYNAMIC_V0`.
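
A minimal sketch of opting in on a write, using the config key from the item above (`df` and `basePath` are placeholders):

```scala
// Sketch: switch the bloom index to the dynamic filter; the default stays unchanged.
df.write.format("org.apache.hudi").
  option("hoodie.bloom.index.filter.type", "DYNAMIC_V0").
  mode(org.apache.spark.sql.SaveMode.Append).
  save(basePath)
```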
 
 ### Raw Release Notes
 The raw release notes are available [here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12346183)
diff --git a/docs/_sass/hudi_style/_variables.scss b/docs/_sass/hudi_style/_variables.scss
index 24b6625..5194fad 100644
--- a/docs/_sass/hudi_style/_variables.scss
+++ b/docs/_sass/hudi_style/_variables.scss
@@ -54,7 +54,7 @@ $light-gray: mix(#fff, $gray, 50%) !default;
 $lighter-gray: mix(#fff, $gray, 90%) !default;
 
 $background-color: #fff !default;
-$code-background-color: #fafafa !default;
+$code-background-color: #f3f3f3 !default;
 $code-background-color-dark: $light-gray !default;
 $text-color: $dark-gray !default;
 $muted-text-color: mix(#fff, $text-color, 35%) !default;
