This is an automated email from the ASF dual-hosted git repository.

alexey pushed a commit to branch branch-1.13.x
in repository https://gitbox.apache.org/repos/asf/kudu.git
commit 265c41c87c82103a4646e3bbde11bcc75275a47b
Author: Alexey Serbin <[email protected]>
AuthorDate: Tue Sep 8 14:13:34 2020 -0700

    [docs] remove duplicate entry from 1.13 release notes

    Removed a note that was a duplicate of the following one:

      * The Spark KuduContext accumulator metrics now track operation
        counts per table instead of cumulatively for all tables.

    Change-Id: I186f7160d71e94ed54c0e583655e67e2c0486ae9
    Reviewed-on: http://gerrit.cloudera.org:8080/16430
    Tested-by: Kudu Jenkins
    Reviewed-by: Attila Bukor <[email protected]>
---
 docs/release_notes.adoc | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/release_notes.adoc b/docs/release_notes.adoc
index 044d389..cfd8223 100644
--- a/docs/release_notes.adoc
+++ b/docs/release_notes.adoc
@@ -95,8 +95,6 @@
 the `--tablet_apply_pool_overload_threshold_ms` Tablet Server’s flag to
 appropriate value, e.g. 250 (see
 link:https://issues.apache.org/jira/browse/KUDU-1587[KUDU-1587]).
-* Operation accumulators in Spark KuduContext are now tracked on a per-table
-  basis.
 * Java client’s error collector can be resized (see
 link:https://issues.apache.org/jira/browse/KUDU-1422[KUDU-1422]).
 * Calls to the Kudu master server are now drastically reduced when using scan
