Fix typos detected by github.com/client9/misspell

## What changes were proposed in this pull request?
Fixing typos is sometimes very hard; they are not easy to catch in visual review. Recently I discovered a very useful tool for this, [misspell](https://github.com/client9/misspell). This pull request fixes the minor typos detected by [misspell](https://github.com/client9/misspell), except for false positives. If you would like me to work on other files as well, let me know.

## How was this patch tested?

### before

```
$ misspell . | grep -v '.js'
R/pkg/R/SQLContext.R:354:43: "definiton" is a misspelling of "definition"
R/pkg/R/SQLContext.R:424:43: "definiton" is a misspelling of "definition"
R/pkg/R/SQLContext.R:445:43: "definiton" is a misspelling of "definition"
R/pkg/R/SQLContext.R:495:43: "definiton" is a misspelling of "definition"
NOTICE-binary:454:16: "containd" is a misspelling of "contained"
R/pkg/R/context.R:46:43: "definiton" is a misspelling of "definition"
R/pkg/R/context.R:74:43: "definiton" is a misspelling of "definition"
R/pkg/R/DataFrame.R:591:48: "persistance" is a misspelling of "persistence"
R/pkg/R/streaming.R:166:44: "occured" is a misspelling of "occurred"
R/pkg/inst/worker/worker.R:65:22: "ouput" is a misspelling of "output"
R/pkg/tests/fulltests/test_utils.R:106:25: "environemnt" is a misspelling of "environment"
common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java:38:39: "existant" is a misspelling of "existent"
common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java:83:39: "existant" is a misspelling of "existent"
common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java:243:46: "transfered" is a misspelling of "transferred"
common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java:234:19: "transfered" is a misspelling of "transferred"
common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java:238:63: "transfered" is a misspelling of "transferred"
common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java:244:46: "transfered" is a misspelling of "transferred"
common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java:276:39: "transfered" is a misspelling of "transferred"
common/network-common/src/main/java/org/apache/spark/network/util/AbstractFileRegion.java:27:20: "transfered" is a misspelling of "transferred"
common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala:195:15: "orgin" is a misspelling of "origin"
core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala:621:39: "gauranteed" is a misspelling of "guaranteed"
core/src/main/scala/org/apache/spark/status/storeTypes.scala:113:29: "ect" is a misspelling of "etc"
core/src/main/scala/org/apache/spark/storage/DiskStore.scala:282:18: "transfered" is a misspelling of "transferred"
core/src/main/scala/org/apache/spark/util/ListenerBus.scala:64:17: "overriden" is a misspelling of "overridden"
core/src/test/scala/org/apache/spark/ShuffleSuite.scala:211:7: "substracted" is a misspelling of "subtracted"
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:1922:49: "agriculteur" is a misspelling of "agriculture"
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:2468:84: "truely" is a misspelling of "truly"
core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala:25:18: "persistance" is a misspelling of "persistence"
core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala:26:69: "persistance" is a misspelling of "persistence"
data/streaming/AFINN-111.txt:1219:0: "humerous" is a misspelling of "humorous"
dev/run-pip-tests:55:28: "enviroments" is a misspelling of "environments"
dev/run-pip-tests:91:37: "virutal" is a misspelling of "virtual"
dev/merge_spark_pr.py:377:72: "accross" is a misspelling of "across"
dev/merge_spark_pr.py:378:66: "accross" is a misspelling of "across"
dev/run-pip-tests:126:25: "enviroments" is a misspelling of "environments"
docs/configuration.md:1830:82: "overriden" is a misspelling of "overridden"
docs/structured-streaming-programming-guide.md:525:45: "processs" is a misspelling of "processes"
docs/structured-streaming-programming-guide.md:1165:61: "BETWEN" is a misspelling of "BETWEEN"
docs/sql-programming-guide.md:1891:810: "behaivor" is a misspelling of "behavior"
examples/src/main/python/sql/arrow.py:98:8: "substract" is a misspelling of "subtract"
examples/src/main/python/sql/arrow.py:103:27: "substract" is a misspelling of "subtract"
licenses/LICENSE-heapq.txt:5:63: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:6:2: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:262:29: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:262:39: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:269:49: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:269:59: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:274:2: "STICHTING" is a misspelling of "STITCHING"
licenses/LICENSE-heapq.txt:274:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses/LICENSE-heapq.txt:276:29: "STICHTING" is a misspelling of "STITCHING"
licenses/LICENSE-heapq.txt:276:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses-binary/LICENSE-heapq.txt:5:63: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:6:2: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:262:29: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:262:39: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:269:49: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:269:59: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:274:2: "STICHTING" is a misspelling of "STITCHING"
licenses-binary/LICENSE-heapq.txt:274:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses-binary/LICENSE-heapq.txt:276:29: "STICHTING" is a misspelling of "STITCHING"
licenses-binary/LICENSE-heapq.txt:276:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
mllib/src/main/resources/org/apache/spark/ml/feature/stopwords/hungarian.txt:170:0: "teh" is a misspelling of "the"
mllib/src/main/resources/org/apache/spark/ml/feature/stopwords/portuguese.txt:53:0: "eles" is a misspelling of "eels"
mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala:99:20: "Euclidian" is a misspelling of "Euclidean"
mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala:539:11: "Euclidian" is a misspelling of "Euclidean"
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala:77:36: "Teh" is a misspelling of "The"
mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala:230:24: "inital" is a misspelling of "initial"
mllib/src/main/scala/org/apache/spark/mllib/stat/MultivariateOnlineSummarizer.scala:276:9: "Euclidian" is a misspelling of "Euclidean"
mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:237:26: "descripiton" is a misspelling of "descriptions"
python/pyspark/find_spark_home.py:30:13: "enviroment" is a misspelling of "environment"
python/pyspark/context.py:937:12: "supress" is a misspelling of "suppress"
python/pyspark/context.py:938:12: "supress" is a misspelling of "suppress"
python/pyspark/context.py:939:12: "supress" is a misspelling of "suppress"
python/pyspark/context.py:940:12: "supress" is a misspelling of "suppress"
python/pyspark/heapq3.py:6:63: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:7:2: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:263:29: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:263:39: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:270:49: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:270:59: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:275:2: "STICHTING" is a misspelling of "STITCHING"
python/pyspark/heapq3.py:275:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
python/pyspark/heapq3.py:277:29: "STICHTING" is a misspelling of "STITCHING"
python/pyspark/heapq3.py:277:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
python/pyspark/heapq3.py:713:8: "probabilty" is a misspelling of "probability"
python/pyspark/ml/clustering.py:1038:8: "Currenlty" is a misspelling of "Currently"
python/pyspark/ml/stat.py:339:23: "Euclidian" is a misspelling of "Euclidean"
python/pyspark/ml/regression.py:1378:20: "paramter" is a misspelling of "parameter"
python/pyspark/mllib/stat/_statistics.py:262:8: "probabilty" is a misspelling of "probability"
python/pyspark/rdd.py:1363:32: "paramter" is a misspelling of "parameter"
python/pyspark/streaming/tests.py:825:42: "retuns" is a misspelling of "returns"
python/pyspark/sql/tests.py:768:29: "initalization" is a misspelling of "initialization"
python/pyspark/sql/tests.py:3616:31: "initalize" is a misspelling of "initialize"
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala:120:39: "arbitary" is a misspelling of "arbitrary"
resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala:26:45: "sucessfully" is a misspelling of "successfully"
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:358:27: "constaints" is a misspelling of "constraints"
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala:111:24: "senstive" is a misspelling of "sensitive"
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala:1063:5: "overwirte" is a misspelling of "overwrite"
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala:1348:17: "compatability" is a misspelling of "compatibility"
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala:77:36: "paramter" is a misspelling of "parameter"
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:1374:22: "precendence" is a misspelling of "precedence"
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala:238:27: "unnecassary" is a misspelling of "unnecessary"
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala:212:17: "whn" is a misspelling of "when"
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala:147:60: "timestmap" is a misspelling of "timestamp"
sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala:150:45: "precentage" is a misspelling of "percentage"
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala:135:29: "infered" is a misspelling of "inferred"
sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922:1:52: "occurance" is a misspelling of "occurrence"
sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182:1:52: "occurance" is a misspelling of "occurrence"
sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e:1:63: "occurance" is a misspelling of "occurrence"
sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478:1:63: "occurance" is a misspelling of "occurrence"
sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8:9:79: "occurence" is a misspelling of "occurrence"
sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8:13:110: "occurence" is a misspelling of "occurrence"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q:46:105: "distint" is a misspelling of "distinct"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q:29:3: "Currenly" is a misspelling of "Currently"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q:72:15: "existant" is a misspelling of "existent"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q:25:3: "substraction" is a misspelling of "subtraction"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q:16:51: "funtion" is a misspelling of "function"
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby_sort_8.q:15:30: "issueing" is a misspelling of "issuing"
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala:669:52: "wiht" is a misspelling of "with"
sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java:474:9: "Refering" is a misspelling of "Referring"
```

### after

```
$ misspell . | grep -v '.js'
common/network-common/src/main/java/org/apache/spark/network/util/AbstractFileRegion.java:27:20: "transfered" is a misspelling of "transferred"
core/src/main/scala/org/apache/spark/status/storeTypes.scala:113:29: "ect" is a misspelling of "etc"
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala:1922:49: "agriculteur" is a misspelling of "agriculture"
data/streaming/AFINN-111.txt:1219:0: "humerous" is a misspelling of "humorous"
licenses/LICENSE-heapq.txt:5:63: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:6:2: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:262:29: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:262:39: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:269:49: "Stichting" is a misspelling of "Stitching"
licenses/LICENSE-heapq.txt:269:59: "Mathematisch" is a misspelling of "Mathematics"
licenses/LICENSE-heapq.txt:274:2: "STICHTING" is a misspelling of "STITCHING"
licenses/LICENSE-heapq.txt:274:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses/LICENSE-heapq.txt:276:29: "STICHTING" is a misspelling of "STITCHING"
licenses/LICENSE-heapq.txt:276:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses-binary/LICENSE-heapq.txt:5:63: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:6:2: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:262:29: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:262:39: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:269:49: "Stichting" is a misspelling of "Stitching"
licenses-binary/LICENSE-heapq.txt:269:59: "Mathematisch" is a misspelling of "Mathematics"
licenses-binary/LICENSE-heapq.txt:274:2: "STICHTING" is a misspelling of "STITCHING"
licenses-binary/LICENSE-heapq.txt:274:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
licenses-binary/LICENSE-heapq.txt:276:29: "STICHTING" is a misspelling of "STITCHING"
licenses-binary/LICENSE-heapq.txt:276:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
mllib/src/main/resources/org/apache/spark/ml/feature/stopwords/hungarian.txt:170:0: "teh" is a misspelling of "the"
mllib/src/main/resources/org/apache/spark/ml/feature/stopwords/portuguese.txt:53:0: "eles" is a misspelling of "eels"
mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala:99:20: "Euclidian" is a misspelling of "Euclidean"
mllib/src/main/scala/org/apache/spark/ml/stat/Summarizer.scala:539:11: "Euclidian" is a misspelling of "Euclidean"
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala:77:36: "Teh" is a misspelling of "The"
mllib/src/main/scala/org/apache/spark/mllib/stat/MultivariateOnlineSummarizer.scala:276:9: "Euclidian" is a misspelling of "Euclidean"
python/pyspark/heapq3.py:6:63: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:7:2: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:263:29: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:263:39: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:270:49: "Stichting" is a misspelling of "Stitching"
python/pyspark/heapq3.py:270:59: "Mathematisch" is a misspelling of "Mathematics"
python/pyspark/heapq3.py:275:2: "STICHTING" is a misspelling of "STITCHING"
python/pyspark/heapq3.py:275:12: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
python/pyspark/heapq3.py:277:29: "STICHTING" is a misspelling of "STITCHING"
python/pyspark/heapq3.py:277:39: "MATHEMATISCH" is a misspelling of "MATHEMATICS"
python/pyspark/ml/stat.py:339:23: "Euclidian" is a misspelling of "Euclidean"
```

Closes #22070 from seratch/fix-typo.
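As a side note for anyone reproducing the check: the `grep -v '.js'` stage above only drops matches whose path contains `.js` (bundled JavaScript whose "typos" are false positives). A minimal sketch of that filtering step, using two fabricated sample lines in misspell's output format rather than real tool output:

```shell
# Fabricated sample lines in misspell's output format; the .js line stands in
# for a false positive from a bundled script.
printf '%s\n' \
  'R/pkg/R/context.R:46:43: "definiton" is a misspelling of "definition"' \
  'docs/js/vendor/jquery.min.js:2:10: "lenght" is a misspelling of "length"' |
  grep -v '\.js'   # keeps only the first line; \. matches the dot literally
```

The unescaped `.js` used in the commands above behaves the same in practice, since `.` matches any character.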
Authored-by: Kazuhiro Sera <sera...@gmail.com>
Signed-off-by: Sean Owen <sro...@gmail.com>

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8ec25cd6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8ec25cd6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8ec25cd6

Branch: refs/heads/master
Commit: 8ec25cd67e7ac4a8165917a4211e17aa8f7b394d
Parents: 4855d5c
Author: Kazuhiro Sera <sera...@gmail.com>
Authored: Sat Aug 11 21:23:36 2018 -0500
Committer: Sean Owen <sro...@gmail.com>
Committed: Sat Aug 11 21:23:36 2018 -0500

----------------------------------------------------------------------
 NOTICE-binary | 4 ++--
 R/pkg/R/DataFrame.R | 2 +-
 R/pkg/R/SQLContext.R | 8 ++++----
 R/pkg/R/context.R | 4 ++--
 R/pkg/R/streaming.R | 2 +-
 R/pkg/inst/worker/worker.R | 2 +-
 R/pkg/tests/fulltests/test_utils.R | 2 +-
 .../org/apache/spark/util/kvstore/InMemoryStoreSuite.java | 2 +-
 .../java/org/apache/spark/util/kvstore/LevelDBSuite.java | 2 +-
 .../org/apache/spark/network/crypto/TransportCipher.java | 2 +-
 .../java/org/apache/spark/network/sasl/SaslEncryption.java | 8 ++++----
 .../spark/unsafe/types/UTF8StringPropertyCheckSuite.scala | 4 ++--
 .../main/scala/org/apache/spark/api/python/PythonRDD.scala | 2 +-
 core/src/main/scala/org/apache/spark/storage/DiskStore.scala | 2 +-
 core/src/main/scala/org/apache/spark/util/ListenerBus.scala | 2 +-
 core/src/test/scala/org/apache/spark/ShuffleSuite.scala | 2 +-
 .../scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala | 2 +-
 .../org/apache/spark/storage/FlatmapIteratorSuite.scala | 4 ++--
 dev/merge_spark_pr.py | 4 ++--
 dev/run-pip-tests | 6 +++---
 docs/configuration.md | 2 +-
 docs/sql-programming-guide.md | 2 +-
 docs/structured-streaming-programming-guide.md | 4 ++--
 examples/src/main/python/sql/arrow.py | 4 ++--
 .../org/apache/spark/mllib/clustering/StreamingKMeans.scala | 2 +-
 .../scala/org/apache/spark/ml/clustering/KMeansSuite.scala | 2 +-
 python/pyspark/context.py | 8 ++++----
 python/pyspark/find_spark_home.py | 2 +-
 python/pyspark/heapq3.py | 2 +-
 python/pyspark/ml/clustering.py | 2 +-
 python/pyspark/ml/regression.py | 2 +-
 python/pyspark/mllib/stat/_statistics.py | 2 +-
 python/pyspark/rdd.py | 2 +-
 python/pyspark/sql/tests.py | 4 ++--
 python/pyspark/streaming/tests.py | 2 +-
 .../scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala | 2 +-
 .../spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala | 2 +-
 .../deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala | 2 +-
 .../org/apache/spark/deploy/yarn/YarnClusterSuite.scala | 2 +-
 .../apache/spark/sql/catalyst/catalog/SessionCatalog.scala | 2 +-
 .../spark/sql/catalyst/expressions/datetimeExpressions.scala | 2 +-
 .../sql/catalyst/plans/logical/basicLogicalOperators.scala | 2 +-
 .../main/scala/org/apache/spark/sql/internal/SQLConf.scala | 2 +-
 .../apache/spark/sql/catalyst/analysis/AnalysisSuite.scala | 2 +-
 .../catalyst/expressions/ConditionalExpressionSuite.scala | 2 +-
 .../streaming/StreamingSymmetricHashJoinHelper.scala | 2 +-
 .../test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala | 2 +-
 .../sql/execution/datasources/csv/CSVInferSchemaSuite.scala | 2 +-
 .../org/apache/hive/service/cli/session/HiveSessionImpl.java | 2 +-
 .../golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922 | 2 +-
 .../golden/udf_instr-2-32da357fc754badd6e3898dcc8989182 | 2 +-
 .../golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e | 2 +-
 .../golden/udf_locate-2-d9b5934457931447874d6bb7c13de478 | 2 +-
 .../golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8 | 4 ++--
 .../ql/src/test/queries/clientpositive/annotate_stats_join.q | 2 +-
 .../src/test/queries/clientpositive/auto_sortmerge_join_11.q | 2 +-
 .../ql/src/test/queries/clientpositive/avro_partitioned.q | 2 +-
 .../ql/src/test/queries/clientpositive/decimal_udf.q | 2 +-
 .../queries/clientpositive/groupby2_map_multi_distinct.q | 2 +-
 .../ql/src/test/queries/clientpositive/groupby_sort_8.q | 2 +-
 .../org/apache/spark/sql/sources/HadoopFsRelationTest.scala | 2 +-
 61 files changed, 81 insertions(+), 81 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/NOTICE-binary
----------------------------------------------------------------------
diff --git a/NOTICE-binary b/NOTICE-binary
index 3155c38..ad256aa 100644
--- a/NOTICE-binary
+++ b/NOTICE-binary
@@ -476,7 +476,7 @@ which has the following notices:
   PureJavaCrc32C from apache-hadoop-common http://hadoop.apache.org/ (Apache 2.0 license)
-  This library containd statically linked libstdc++. This inclusion is allowed by
+  This library contains statically linked libstdc++. This inclusion is allowed by
   "GCC RUntime Library Exception" http://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html
@@ -1192,4 +1192,4 @@ Apache Solr (http://lucene.apache.org/solr/)
 Copyright 2014 The Apache Software Foundation
 
 Apache Mahout (http://mahout.apache.org/)
-Copyright 2014 The Apache Software Foundation
\ No newline at end of file
+Copyright 2014 The Apache Software Foundation


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/R/DataFrame.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/DataFrame.R b/R/pkg/R/DataFrame.R
index 70eb7a8..471ada1 100644
--- a/R/pkg/R/DataFrame.R
+++ b/R/pkg/R/DataFrame.R
@@ -588,7 +588,7 @@ setMethod("cache",
 #' \url{http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence}.
 #'
 #' @param x the SparkDataFrame to persist.
-#' @param newLevel storage level chosen for the persistance. See available options in
+#' @param newLevel storage level chosen for the persistence. See available options in
 #' the description.
 #'
 #' @family SparkDataFrame functions


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/R/SQLContext.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/SQLContext.R b/R/pkg/R/SQLContext.R
index 429dd5d..c819a7d 100644
--- a/R/pkg/R/SQLContext.R
+++ b/R/pkg/R/SQLContext.R
@@ -351,7 +351,7 @@ setMethod("toDF", signature(x = "RDD"),
 read.json.default <- function(path, ...) {
   sparkSession <- getSparkSession()
   options <- varargsToStrEnv(...)
-  # Allow the user to have a more flexible definiton of the text file path
+  # Allow the user to have a more flexible definition of the text file path
   paths <- as.list(suppressWarnings(normalizePath(path)))
   read <- callJMethod(sparkSession, "read")
   read <- callJMethod(read, "options", options)
@@ -421,7 +421,7 @@ jsonRDD <- function(sqlContext, rdd, schema = NULL, samplingRatio = 1.0) {
 read.orc <- function(path, ...) {
   sparkSession <- getSparkSession()
   options <- varargsToStrEnv(...)
-  # Allow the user to have a more flexible definiton of the ORC file path
+  # Allow the user to have a more flexible definition of the ORC file path
   path <- suppressWarnings(normalizePath(path))
   read <- callJMethod(sparkSession, "read")
   read <- callJMethod(read, "options", options)
@@ -442,7 +442,7 @@ read.orc <- function(path, ...) {
 read.parquet.default <- function(path, ...) {
   sparkSession <- getSparkSession()
   options <- varargsToStrEnv(...)
-  # Allow the user to have a more flexible definiton of the Parquet file path
+  # Allow the user to have a more flexible definition of the Parquet file path
   paths <- as.list(suppressWarnings(normalizePath(path)))
   read <- callJMethod(sparkSession, "read")
   read <- callJMethod(read, "options", options)
@@ -492,7 +492,7 @@ parquetFile <- function(x, ...) {
 read.text.default <- function(path, ...) {
   sparkSession <- getSparkSession()
   options <- varargsToStrEnv(...)
-  # Allow the user to have a more flexible definiton of the text file path
+  # Allow the user to have a more flexible definition of the text file path
   paths <- as.list(suppressWarnings(normalizePath(path)))
   read <- callJMethod(sparkSession, "read")
   read <- callJMethod(read, "options", options)


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/R/context.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/context.R b/R/pkg/R/context.R
index 3e996a5..7e77ea4 100644
--- a/R/pkg/R/context.R
+++ b/R/pkg/R/context.R
@@ -43,7 +43,7 @@ getMinPartitions <- function(sc, minPartitions) {
 #' lines <- textFile(sc, "myfile.txt")
 #'}
 textFile <- function(sc, path, minPartitions = NULL) {
-  # Allow the user to have a more flexible definiton of the text file path
+  # Allow the user to have a more flexible definition of the text file path
   path <- suppressWarnings(normalizePath(path))
   # Convert a string vector of paths to a string containing comma separated paths
   path <- paste(path, collapse = ",")
@@ -71,7 +71,7 @@ textFile <- function(sc, path, minPartitions = NULL) {
 #' rdd <- objectFile(sc, "myfile")
 #'}
 objectFile <- function(sc, path, minPartitions = NULL) {
-  # Allow the user to have a more flexible definiton of the text file path
+  # Allow the user to have a more flexible definition of the text file path
   path <- suppressWarnings(normalizePath(path))
   # Convert a string vector of paths to a string containing comma separated paths
   path <- paste(path, collapse = ",")


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/R/streaming.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/streaming.R b/R/pkg/R/streaming.R
index fc83463..5eccbdc 100644
--- a/R/pkg/R/streaming.R
+++ b/R/pkg/R/streaming.R
@@ -163,7 +163,7 @@ setMethod("isActive",
 #'
 #' @param x a StreamingQuery.
 #' @param timeout time to wait in milliseconds, if omitted, wait indefinitely until \code{stopQuery}
-#' is called or an error has occured.
+#' is called or an error has occurred.
 #' @return TRUE if query has terminated within the timeout period; nothing if timeout is not
 #' specified.
 #' @rdname awaitTermination


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/inst/worker/worker.R
----------------------------------------------------------------------
diff --git a/R/pkg/inst/worker/worker.R b/R/pkg/inst/worker/worker.R
index ba458d2..c2adf61 100644
--- a/R/pkg/inst/worker/worker.R
+++ b/R/pkg/inst/worker/worker.R
@@ -62,7 +62,7 @@ compute <- function(mode, partition, serializer, deserializer, key,
       # Transform the result data.frame back to a list of rows
       output <- split(output, seq(nrow(output)))
     } else {
-      # Serialize the ouput to a byte array
+      # Serialize the output to a byte array
       stopifnot(serializer == "byte")
     }
   } else {


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/R/pkg/tests/fulltests/test_utils.R
----------------------------------------------------------------------
diff --git a/R/pkg/tests/fulltests/test_utils.R b/R/pkg/tests/fulltests/test_utils.R
index f0292ab..b2b6f34 100644
--- a/R/pkg/tests/fulltests/test_utils.R
+++ b/R/pkg/tests/fulltests/test_utils.R
@@ -103,7 +103,7 @@ test_that("cleanClosure on R functions", {
   expect_true("l" %in% ls(env))
   expect_true("f" %in% ls(env))
   expect_equal(get("l", envir = env, inherits = FALSE), l)
-  # "y" should be in the environemnt of g.
+  # "y" should be in the environment of g.
   newG <- get("g", envir = env, inherits = FALSE)
   env <- environment(newG)
   expect_equal(length(ls(env)), 1)


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java
----------------------------------------------------------------------
diff --git a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java
index 510b305..9abf26f 100644
--- a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java
+++ b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/InMemoryStoreSuite.java
@@ -35,7 +35,7 @@ public class InMemoryStoreSuite {
     try {
       store.read(CustomType1.class, t.key);
-      fail("Expected exception for non-existant object.");
+      fail("Expected exception for non-existent object.");
     } catch (NoSuchElementException nsee) {
       // Expected.
     }


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
----------------------------------------------------------------------
diff --git a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
index b8123ac..205f7df 100644
--- a/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
+++ b/common/kvstore/src/test/java/org/apache/spark/util/kvstore/LevelDBSuite.java
@@ -80,7 +80,7 @@ public class LevelDBSuite {
     try {
       db.read(CustomType1.class, t.key);
-      fail("Expected exception for non-existant object.");
+      fail("Expected exception for non-existent object.");
     } catch (NoSuchElementException nsee) {
       // Expected.
     }


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java
----------------------------------------------------------------------
diff --git a/common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java b/common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java
index 452408d..b64e4b7 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/crypto/TransportCipher.java
@@ -240,7 +240,7 @@ public class TransportCipher {
     @Override
     public long transferTo(WritableByteChannel target, long position) throws IOException {
-      Preconditions.checkArgument(position == transfered(), "Invalid position.");
+      Preconditions.checkArgument(position == transferred(), "Invalid position.");
       do {
         if (currentEncrypted == null) {


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java
----------------------------------------------------------------------
diff --git a/common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java b/common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java
index 1dcf132..e127568 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/sasl/SaslEncryption.java
@@ -231,17 +231,17 @@ class SaslEncryption {
    * data into memory at once, and can avoid ballooning memory usage when transferring large
    * messages such as shuffle blocks.
    *
-   * The {@link #transfered()} counter also behaves a little funny, in that it won't go forward
+   * The {@link #transferred()} counter also behaves a little funny, in that it won't go forward
    * until a whole chunk has been written. This is done because the code can't use the actual
    * number of bytes written to the channel as the transferred count (see {@link #count()}).
    * Instead, once an encrypted chunk is written to the output (including its header), the
-   * size of the original block will be added to the {@link #transfered()} amount.
+   * size of the original block will be added to the {@link #transferred()} amount.
    */
   @Override
   public long transferTo(final WritableByteChannel target, final long position)
     throws IOException {
-    Preconditions.checkArgument(position == transfered(), "Invalid position.");
+    Preconditions.checkArgument(position == transferred(), "Invalid position.");
     long reportedWritten = 0L;
     long actuallyWritten = 0L;
@@ -273,7 +273,7 @@ class SaslEncryption {
         currentChunkSize = 0;
         currentReportedBytes = 0;
       }
-    } while (currentChunk == null && transfered() + reportedWritten < count());
+    } while (currentChunk == null && transferred() + reportedWritten < count());
     // Returning 0 triggers a backoff mechanism in netty which may harm performance. Instead,
     // we return 1 until we can (i.e. until the reported count would actually match the size


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala
----------------------------------------------------------------------
diff --git a/common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala b/common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala
index 48004e8..7d3331f 100644
--- a/common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala
+++ b/common/unsafe/src/test/scala/org/apache/spark/unsafe/types/UTF8StringPropertyCheckSuite.scala
@@ -192,8 +192,8 @@ class UTF8StringPropertyCheckSuite extends FunSuite with GeneratorDrivenProperty
   val nullalbeSeq = Gen.listOf(Gen.oneOf[String](null: String, randomString))
   test("concat") {
-    def concat(orgin: Seq[String]): String =
-      if (orgin.contains(null)) null else orgin.mkString
+    def concat(origin: Seq[String]): String =
+      if (origin.contains(null)) null else origin.mkString
     forAll { (inputs: Seq[String]) =>
       assert(UTF8String.concat(inputs.map(toUTF8): _*) === toUTF8(inputs.mkString))


http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala b/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
index 8c2ce88..c3db60a 100644
--- a/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
+++ b/core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
@@ -622,7 +622,7 @@ private[spark] class PythonAccumulatorV2(
   override def merge(other: AccumulatorV2[Array[Byte], JList[Array[Byte]]]): Unit = synchronized {
     val otherPythonAccumulator = other.asInstanceOf[PythonAccumulatorV2]
     // This conditional isn't strictly speaking needed
- merging only currently happens on the - // driver program - but that isn't gauranteed so incase this changes. + // driver program - but that isn't guaranteed so incase this changes. if (serverHost == null) { // We are on the worker super.merge(otherPythonAccumulator) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---------------------------------------------------------------------- diff --git a/core/src/main/scala/org/apache/spark/storage/DiskStore.scala b/core/src/main/scala/org/apache/spark/storage/DiskStore.scala index 39249d4..ef526fd 100644 --- a/core/src/main/scala/org/apache/spark/storage/DiskStore.scala +++ b/core/src/main/scala/org/apache/spark/storage/DiskStore.scala @@ -279,7 +279,7 @@ private class ReadableChannelFileRegion(source: ReadableByteChannel, blockSize: override def transferred(): Long = _transferred override def transferTo(target: WritableByteChannel, pos: Long): Long = { - assert(pos == transfered(), "Invalid position.") + assert(pos == transferred(), "Invalid position.") var written = 0L var lastWrite = -1L http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/main/scala/org/apache/spark/util/ListenerBus.scala ---------------------------------------------------------------------- diff --git a/core/src/main/scala/org/apache/spark/util/ListenerBus.scala b/core/src/main/scala/org/apache/spark/util/ListenerBus.scala index d4474a9..a8f1068 100644 --- a/core/src/main/scala/org/apache/spark/util/ListenerBus.scala +++ b/core/src/main/scala/org/apache/spark/util/ListenerBus.scala @@ -61,7 +61,7 @@ private[spark] trait ListenerBus[L <: AnyRef, E] extends Logging { } /** - * This can be overriden by subclasses if there is any extra cleanup to do when removing a + * This can be overridden by subclasses if there is any extra cleanup to do when removing a * listener. In particular AsyncEventQueues can clean up queues in the LiveListenerBus. 
*/ def removeListenerOnError(listener: L): Unit = { http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/test/scala/org/apache/spark/ShuffleSuite.scala ---------------------------------------------------------------------- diff --git a/core/src/test/scala/org/apache/spark/ShuffleSuite.scala b/core/src/test/scala/org/apache/spark/ShuffleSuite.scala index d11eaf8..456f97b 100644 --- a/core/src/test/scala/org/apache/spark/ShuffleSuite.scala +++ b/core/src/test/scala/org/apache/spark/ShuffleSuite.scala @@ -208,7 +208,7 @@ abstract class ShuffleSuite extends SparkFunSuite with Matchers with LocalSparkC val pairs2: RDD[MutablePair[Int, String]] = sc.parallelize(data2, 2) val results = new SubtractedRDD(pairs1, pairs2, new HashPartitioner(2)).collect() results should have length (1) - // substracted rdd return results as Tuple2 + // subtracted rdd return results as Tuple2 results(0) should be ((3, 33)) } http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---------------------------------------------------------------------- diff --git a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala index 5e095ce..3fbe636 100644 --- a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala +++ b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala @@ -2465,7 +2465,7 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi runEvent(makeCompletionEvent( taskSets(1).tasks(1), Success, makeMapStatus("hostA", 2))) - // Both tasks in rddB should be resubmitted, because none of them has succeeded truely. + // Both tasks in rddB should be resubmitted, because none of them has succeeded truly. // Complete the task(stageId=1, stageAttemptId=1, partitionId=0) successfully. 
// Task(stageId=1, stageAttemptId=1, partitionId=1) of this new active stage attempt // is still running. http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala ---------------------------------------------------------------------- diff --git a/core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala b/core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala index b21c91f..4282850 100644 --- a/core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala +++ b/core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala @@ -22,8 +22,8 @@ import org.apache.spark._ class FlatmapIteratorSuite extends SparkFunSuite with LocalSparkContext { /* Tests the ability of Spark to deal with user provided iterators from flatMap * calls, that may generate more data then available memory. In any - * memory based persistance Spark will unroll the iterator into an ArrayBuffer - * for caching, however in the case that the use defines DISK_ONLY persistance, + * memory based persistence Spark will unroll the iterator into an ArrayBuffer + * for caching, however in the case that the use defines DISK_ONLY persistence, * the iterator will be fed directly to the serializer and written to disk. * * This also tests the ObjectOutputStream reset rate. When serializing using the http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/dev/merge_spark_pr.py ---------------------------------------------------------------------- diff --git a/dev/merge_spark_pr.py b/dev/merge_spark_pr.py index 7a6f7d2..fe05282 100755 --- a/dev/merge_spark_pr.py +++ b/dev/merge_spark_pr.py @@ -374,8 +374,8 @@ def standardize_jira_ref(text): >>> standardize_jira_ref("[SPARK-979] a LRU scheduler for load balancing in TaskSchedulerImpl") '[SPARK-979] a LRU scheduler for load balancing in TaskSchedulerImpl' >>> standardize_jira_ref( - ... 
"SPARK-1094 Support MiMa for reporting binary compatibility accross versions.") - '[SPARK-1094] Support MiMa for reporting binary compatibility accross versions.' + ... "SPARK-1094 Support MiMa for reporting binary compatibility across versions.") + '[SPARK-1094] Support MiMa for reporting binary compatibility across versions.' >>> standardize_jira_ref("[WIP] [SPARK-1146] Vagrant support for Spark") '[SPARK-1146][WIP] Vagrant support for Spark' >>> standardize_jira_ref( http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/dev/run-pip-tests ---------------------------------------------------------------------- diff --git a/dev/run-pip-tests b/dev/run-pip-tests index 7271d10..60cf4d8 100755 --- a/dev/run-pip-tests +++ b/dev/run-pip-tests @@ -52,7 +52,7 @@ if hash virtualenv 2>/dev/null && [ ! -n "$USE_CONDA" ]; then PYTHON_EXECS+=('python3') fi elif hash conda 2>/dev/null; then - echo "Using conda virtual enviroments" + echo "Using conda virtual environments" PYTHON_EXECS=('3.5') USE_CONDA=1 else @@ -88,7 +88,7 @@ for python in "${PYTHON_EXECS[@]}"; do virtualenv --python=$python "$VIRTUALENV_PATH" source "$VIRTUALENV_PATH"/bin/activate fi - # Upgrade pip & friends if using virutal env + # Upgrade pip & friends if using virtual env if [ ! 
-n "$USE_CONDA" ]; then pip install --upgrade pip pypandoc wheel numpy fi @@ -123,7 +123,7 @@ for python in "${PYTHON_EXECS[@]}"; do cd "$FWDIR" - # conda / virtualenv enviroments need to be deactivated differently + # conda / virtualenv environments need to be deactivated differently if [ -n "$USE_CONDA" ]; then source deactivate else http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/docs/configuration.md ---------------------------------------------------------------------- diff --git a/docs/configuration.md b/docs/configuration.md index 4911abb..9c4742a 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -1827,7 +1827,7 @@ Apart from these, the following properties are also available, and may be useful executors w.r.t. full parallelism. Defaults to 1.0 to give maximum parallelism. 0.5 will divide the target number of executors by 2 - The target number of executors computed by the dynamicAllocation can still be overriden + The target number of executors computed by the dynamicAllocation can still be overridden by the <code>spark.dynamicAllocation.minExecutors</code> and <code>spark.dynamicAllocation.maxExecutors</code> settings </td> http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/docs/sql-programming-guide.md ---------------------------------------------------------------------- diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md index 9adb86a..d9ebc3c 100644 --- a/docs/sql-programming-guide.md +++ b/docs/sql-programming-guide.md @@ -1888,7 +1888,7 @@ working with timestamps in `pandas_udf`s to get the best performance, see - Since Spark 2.4, renaming a managed table to existing location is not allowed. An exception is thrown when attempting to rename a managed table to existing location. - Since Spark 2.4, the type coercion rules can automatically promote the argument types of the variadic SQL functions (e.g., IN/COALESCE) to the widest common type, no matter how the input arguments order. 
In prior Spark versions, the promotion could fail in some specific orders (e.g., TimestampType, IntegerType and StringType) and throw an exception. - Since Spark 2.4, Spark has enabled non-cascading SQL cache invalidation in addition to the traditional cache invalidation mechanism. The non-cascading cache invalidation mechanism allows users to remove a cache without impacting its dependent caches. This new cache invalidation mechanism is used in scenarios where the data of the cache to be removed is still valid, e.g., calling unpersist() on a Dataset, or dropping a temporary view. This allows users to free up memory and keep the desired caches valid at the same time. - - In version 2.3 and earlier, `to_utc_timestamp` and `from_utc_timestamp` respect the timezone in the input timestamp string, which breaks the assumption that the input timestamp is in a specific timezone. Therefore, these 2 functions can return unexpected results. In version 2.4 and later, this problem has been fixed. `to_utc_timestamp` and `from_utc_timestamp` will return null if the input timestamp string contains timezone. As an example, `from_utc_timestamp('2000-10-10 00:00:00', 'GMT+1')` will return `2000-10-10 01:00:00` in both Spark 2.3 and 2.4. However, `from_utc_timestamp('2000-10-10 00:00:00+00:00', 'GMT+1')`, assuming a local timezone of GMT+8, will return `2000-10-10 09:00:00` in Spark 2.3 but `null` in 2.4. For people who don't care about this problem and want to retain the previous behaivor to keep their query unchanged, you can set `spark.sql.function.rejectTimezoneInString` to false. This option will be removed in Spark 3.0 and should only be used as a temporary workaround. + - In version 2.3 and earlier, `to_utc_timestamp` and `from_utc_timestamp` respect the timezone in the input timestamp string, which breaks the assumption that the input timestamp is in a specific timezone. Therefore, these 2 functions can return unexpected results.
In version 2.4 and later, this problem has been fixed. `to_utc_timestamp` and `from_utc_timestamp` will return null if the input timestamp string contains timezone. As an example, `from_utc_timestamp('2000-10-10 00:00:00', 'GMT+1')` will return `2000-10-10 01:00:00` in both Spark 2.3 and 2.4. However, `from_utc_timestamp('2000-10-10 00:00:00+00:00', 'GMT+1')`, assuming a local timezone of GMT+8, will return `2000-10-10 09:00:00` in Spark 2.3 but `null` in 2.4. For people who don't care about this problem and want to retain the previous behavior to keep their query unchanged, you can set `spark.sql.function.rejectTimezoneInString` to false. This option will be removed in Spark 3.0 and should only be used as a temporary workaround. - In version 2.3 and earlier, Spark converts Parquet Hive tables by default but ignores table properties like `TBLPROPERTIES (parquet.compression 'NONE')`. This happens for ORC Hive table properties like `TBLPROPERTIES (orc.compress 'NONE')` in case of `spark.sql.hive.convertMetastoreOrc=true`, too. Since Spark 2.4, Spark respects Parquet/ORC specific table properties while converting Parquet/ORC Hive tables. As an example, `CREATE TABLE t(id int) STORED AS PARQUET TBLPROPERTIES (parquet.compression 'NONE')` would generate Snappy parquet files during insertion in Spark 2.3, and in Spark 2.4, the result would be uncompressed parquet files. - Since Spark 2.0, Spark converts Parquet Hive tables by default for better performance. Since Spark 2.4, Spark converts ORC Hive tables by default, too. It means Spark uses its own ORC support by default instead of Hive SerDe. As an example, `CREATE TABLE t(id int) STORED AS ORC` would be handled with Hive SerDe in Spark 2.3, and in Spark 2.4, it would be converted into Spark's ORC data source table and ORC vectorization would be applied. To set `false` to `spark.sql.hive.convertMetastoreOrc` restores the previous behavior.
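The null-on-timezone rule described in that migration note can be sketched in plain Python. This is a hypothetical, simplified detector for illustration only; Spark's actual timestamp parser recognizes many more formats than a trailing `Z` or `+HH:MM` offset:

```python
import re

def has_timezone_designator(ts_string):
    # Spark 2.4's to_utc_timestamp/from_utc_timestamp return null when the
    # input string carries its own timezone designator; this predicate
    # sketches that detection (simplified: trailing Z or +HH:MM / -HH:MM)
    return re.search(r'(Z|[+-]\d{2}:?\d{2})$', ts_string) is not None

# mirrors the examples in the note: the first input would be shifted,
# the second would yield null in Spark 2.4
has_timezone_designator('2000-10-10 00:00:00')        # False
has_timezone_designator('2000-10-10 00:00:00+00:00')  # True
```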
- In version 2.3 and earlier, CSV rows are considered as malformed if at least one column value in the row is malformed. CSV parser dropped such rows in the DROPMALFORMED mode or outputs an error in the FAILFAST mode. Since Spark 2.4, CSV row is considered as malformed only when it contains malformed column values requested from CSV datasource, other values can be ignored. As an example, CSV file contains the "id,name" header and one row "1234". In Spark 2.4, selection of the id column consists of a row with one column value 1234 but in Spark 2.3 and earlier it is empty in the DROPMALFORMED mode. To restore the previous behavior, set `spark.sql.csv.parser.columnPruning.enabled` to `false`. http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/docs/structured-streaming-programming-guide.md ---------------------------------------------------------------------- diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md index 0842e8d..b832f71 100644 --- a/docs/structured-streaming-programming-guide.md +++ b/docs/structured-streaming-programming-guide.md @@ -522,7 +522,7 @@ Here are the details of all the sources in Spark. <br/> <code>maxFilesPerTrigger</code>: maximum number of new files to be considered in every trigger (default: no max) <br/> - <code>latestFirst</code>: whether to processs the latest new files first, useful when there is a large backlog of files (default: false) + <code>latestFirst</code>: whether to process the latest new files first, useful when there is a large backlog of files (default: false) <br/> <code>fileNameOnly</code>: whether to check new files based on only the filename instead of on the full path (default: false). With this set to `true`, the following files would be considered as the same file, because their filenames, "dataset.txt", are the same: <br/> @@ -1162,7 +1162,7 @@ In other words, you will have to do the following additional steps in the join. 
old rows of one input is not going to be required (i.e. will not satisfy the time constraint) for matches with the other input. This constraint can be defined in one of the two ways. - 1. Time range join conditions (e.g. `...JOIN ON leftTime BETWEN rightTime AND rightTime + INTERVAL 1 HOUR`), + 1. Time range join conditions (e.g. `...JOIN ON leftTime BETWEEN rightTime AND rightTime + INTERVAL 1 HOUR`), 1. Join on event-time windows (e.g. `...JOIN ON leftTimeWindow = rightTimeWindow`). http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/examples/src/main/python/sql/arrow.py ---------------------------------------------------------------------- diff --git a/examples/src/main/python/sql/arrow.py b/examples/src/main/python/sql/arrow.py index 6c4510d..5eb164b 100644 --- a/examples/src/main/python/sql/arrow.py +++ b/examples/src/main/python/sql/arrow.py @@ -95,12 +95,12 @@ def grouped_map_pandas_udf_example(spark): ("id", "v")) @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP) - def substract_mean(pdf): + def subtract_mean(pdf): # pdf is a pandas.DataFrame v = pdf.v return pdf.assign(v=v - v.mean()) - df.groupby("id").apply(substract_mean).show() + df.groupby("id").apply(subtract_mean).show() # +---+----+ # | id| v| # +---+----+ http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala ---------------------------------------------------------------------- diff --git a/mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala b/mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala index 7a5e520..ed8543d 100644 --- a/mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala +++ b/mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala @@ -227,7 +227,7 @@ class StreamingKMeans @Since("1.2.0") ( require(centers.size == k, s"Number of initial centers must be ${k} but got ${centers.size}") 
require(weights.forall(_ >= 0), - s"Weight for each inital center must be nonnegative but got [${weights.mkString(" ")}]") + s"Weight for each initial center must be nonnegative but got [${weights.mkString(" ")}]") model = new StreamingKMeansModel(centers, weights) this } http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala ---------------------------------------------------------------------- diff --git a/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala b/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala index 9b0b526..ccbceab 100644 --- a/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala +++ b/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala @@ -234,7 +234,7 @@ class KMeansSuite extends MLTest with DefaultReadWriteTest with PMMLReadWriteTes val oldKmeansModel = new MLlibKMeansModel(clusterCenters) val kmeansModel = new KMeansModel("", oldKmeansModel) def checkModel(pmml: PMML): Unit = { - // Check the header descripiton is what we expect + // Check the header description is what we expect assert(pmml.getHeader.getDescription === "k-means clustering") // check that the number of fields match the single vector size assert(pmml.getDataDictionary.getNumberOfFields === clusterCenters(0).size) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/context.py ---------------------------------------------------------------------- diff --git a/python/pyspark/context.py b/python/pyspark/context.py index 0ff4f5b..40208ec 100644 --- a/python/pyspark/context.py +++ b/python/pyspark/context.py @@ -934,10 +934,10 @@ class SparkContext(object): >>> def stop_job(): ... sleep(5) ... 
sc.cancelJobGroup("job_to_cancel") - >>> supress = lock.acquire() - >>> supress = threading.Thread(target=start_job, args=(10,)).start() - >>> supress = threading.Thread(target=stop_job).start() - >>> supress = lock.acquire() + >>> suppress = lock.acquire() + >>> suppress = threading.Thread(target=start_job, args=(10,)).start() + >>> suppress = threading.Thread(target=stop_job).start() + >>> suppress = lock.acquire() >>> print(result) Cancelled http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/find_spark_home.py ---------------------------------------------------------------------- diff --git a/python/pyspark/find_spark_home.py b/python/pyspark/find_spark_home.py index 9cf0e8c..9c4ed46 100755 --- a/python/pyspark/find_spark_home.py +++ b/python/pyspark/find_spark_home.py @@ -27,7 +27,7 @@ import sys def _find_spark_home(): """Find the SPARK_HOME.""" - # If the enviroment has SPARK_HOME set trust it. + # If the environment has SPARK_HOME set trust it. if "SPARK_HOME" in os.environ: return os.environ["SPARK_HOME"] http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/heapq3.py ---------------------------------------------------------------------- diff --git a/python/pyspark/heapq3.py b/python/pyspark/heapq3.py index 6af084a..37a2914 100644 --- a/python/pyspark/heapq3.py +++ b/python/pyspark/heapq3.py @@ -710,7 +710,7 @@ def merge(iterables, key=None, reverse=False): # value seen being in the 100 most extreme values is 100/101. # * If the value is a new extreme value, the cost of inserting it into the # heap is 1 + log(k, 2). 
-# * The probabilty times the cost gives: +# * The probability times the cost gives: # (k/i) * (1 + log(k, 2)) # * Summing across the remaining n-k elements gives: # sum((k/i) * (1 + log(k, 2)) for i in range(k+1, n+1)) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/ml/clustering.py ---------------------------------------------------------------------- diff --git a/python/pyspark/ml/clustering.py b/python/pyspark/ml/clustering.py index ef9822d..ab449bc 100644 --- a/python/pyspark/ml/clustering.py +++ b/python/pyspark/ml/clustering.py @@ -1035,7 +1035,7 @@ class LDA(JavaEstimator, HasFeaturesCol, HasMaxIter, HasSeed, HasCheckpointInter def setOptimizer(self, value): """ Sets the value of :py:attr:`optimizer`. - Currenlty only support 'em' and 'online'. + Currently only support 'em' and 'online'. >>> algo = LDA().setOptimizer("em") >>> algo.getOptimizer() http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/ml/regression.py ---------------------------------------------------------------------- diff --git a/python/pyspark/ml/regression.py b/python/pyspark/ml/regression.py index 564c9f1..513ca5a 100644 --- a/python/pyspark/ml/regression.py +++ b/python/pyspark/ml/regression.py @@ -1375,7 +1375,7 @@ class AFTSurvivalRegressionModel(JavaModel, JavaMLWritable, JavaMLReadable): @since("1.6.0") def scale(self): """ - Model scale paramter. + Model scale parameter. """ return self._call_java("scale") http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/mllib/stat/_statistics.py ---------------------------------------------------------------------- diff --git a/python/pyspark/mllib/stat/_statistics.py b/python/pyspark/mllib/stat/_statistics.py index 937bb15..6e89bfd 100644 --- a/python/pyspark/mllib/stat/_statistics.py +++ b/python/pyspark/mllib/stat/_statistics.py @@ -259,7 +259,7 @@ class Statistics(object): The KS statistic gives us the maximum distance between the ECDF and the CDF. 
Intuitively if this statistic is large, the - probabilty that the null hypothesis is true becomes small. + probability that the null hypothesis is true becomes small. For specific details of the implementation, please have a look at the Scala documentation. http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/rdd.py ---------------------------------------------------------------------- diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py index d17a8eb..ba39edb 100644 --- a/python/pyspark/rdd.py +++ b/python/pyspark/rdd.py @@ -1360,7 +1360,7 @@ class RDD(object): if len(items) == 0: numPartsToTry = partsScanned * 4 else: - # the first paramter of max is >=1 whenever partsScanned >= 2 + # the first parameter of max is >=1 whenever partsScanned >= 2 numPartsToTry = int(1.5 * num * partsScanned / len(items)) - partsScanned numPartsToTry = min(max(numPartsToTry, 1), partsScanned * 4) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/sql/tests.py ---------------------------------------------------------------------- diff --git a/python/pyspark/sql/tests.py b/python/pyspark/sql/tests.py index ed97a63..91ed600 100644 --- a/python/pyspark/sql/tests.py +++ b/python/pyspark/sql/tests.py @@ -765,7 +765,7 @@ class SQLTests(ReusedSQLTestCase): row2 = df2.select(sameText(df2['file'])).first() self.assertTrue(row2[0].find("people.json") != -1) - def test_udf_defers_judf_initalization(self): + def test_udf_defers_judf_initialization(self): # This is separate of UDFInitializationTests # to avoid context initialization # when udf is called @@ -3613,7 +3613,7 @@ class UDFInitializationTests(unittest.TestCase): if SparkContext._active_spark_context is not None: SparkContext._active_spark_context.stop() - def test_udf_init_shouldnt_initalize_context(self): + def test_udf_init_shouldnt_initialize_context(self): from pyspark.sql.functions import UserDefinedFunction UserDefinedFunction(lambda x: x, StringType()) 
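The renamed tests above (`test_udf_defers_judf_initialization`, `test_udf_init_shouldnt_initialize_context`) guard a lazy-initialization pattern: constructing a UDF must not touch the SparkContext. A minimal Python sketch of that pattern follows; the names `LazyUDF` and `_judf` are illustrative stand-ins, not PySpark's actual internals:

```python
class LazyUDF:
    """Defers expensive initialization until the wrapped handle is used."""

    def __init__(self, func):
        self.func = func
        self._judf = None  # stands in for the JVM-side UDF handle

    @property
    def judf(self):
        # expensive initialization (e.g. starting a context) happens only
        # on first access, never in __init__
        if self._judf is None:
            self._judf = ("initialized", self.func)
        return self._judf

u = LazyUDF(lambda x: x)
assert u._judf is None       # construction alone initializes nothing
_ = u.judf
assert u._judf is not None   # first access triggers initialization
```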
http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/python/pyspark/streaming/tests.py ---------------------------------------------------------------------- diff --git a/python/pyspark/streaming/tests.py b/python/pyspark/streaming/tests.py index 373784f..09af47a 100644 --- a/python/pyspark/streaming/tests.py +++ b/python/pyspark/streaming/tests.py @@ -822,7 +822,7 @@ class StreamingContextTests(PySparkStreamingTestCase): self.ssc = StreamingContext.getActiveOrCreate(None, setupFunc) self.assertTrue(self.setupCalled) - # Verify that getActiveOrCreate() retuns active context and does not call the setupFunc + # Verify that getActiveOrCreate() returns active context and does not call the setupFunc self.ssc.start() self.setupCalled = False self.assertEqual(StreamingContext.getActiveOrCreate(None, setupFunc), self.ssc) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala ---------------------------------------------------------------------- diff --git a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala index bfb7361..b4364a5 100644 --- a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala +++ b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala @@ -117,7 +117,7 @@ private[mesos] object MesosSchedulerBackendUtil extends Logging { case Array(key, value) => Some(param.setKey(key).setValue(value)) case spec => - logWarning(s"Unable to parse arbitary parameters: $params. " + logWarning(s"Unable to parse arbitrary parameters: $params. 
" + "Expected form: \"key=value(, ...)\"") None } http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala ---------------------------------------------------------------------- diff --git a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala index ecbcc96..8ef1e18 100644 --- a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala +++ b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala @@ -355,7 +355,7 @@ trait MesosSchedulerUtils extends Logging { * https://github.com/apache/mesos/blob/master/src/common/values.cpp * https://github.com/apache/mesos/blob/master/src/common/attributes.cpp * - * @param constraintsVal constaints string consisting of ';' separated key-value pairs (separated + * @param constraintsVal constains string consisting of ';' separated key-value pairs (separated * by ':') * @return Map of constraints to match resources offers. 
*/ http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala ---------------------------------------------------------------------- diff --git a/resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala b/resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala index 33e7d69..057c51d 100644 --- a/resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala +++ b/resource-managers/mesos/src/test/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArgumentsSuite.scala @@ -23,7 +23,7 @@ import org.apache.spark.deploy.TestPrematureExit class MesosClusterDispatcherArgumentsSuite extends SparkFunSuite with TestPrematureExit { - test("test if spark config args are passed sucessfully") { + test("test if spark config args are passed successfully") { val args = Array[String]("--master", "mesos://localhost:5050", "--conf", "key1=value1", "--conf", "spark.mesos.key2=value2", "--verbose") val conf = new SparkConf() http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala ---------------------------------------------------------------------- diff --git a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala index 3b78b88..d67f5d2 100644 --- a/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala +++ b/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala @@ -108,7 +108,7 @@ class YarnClusterSuite extends BaseYarnClusterSuite { "spark.executor.cores" -> "1", "spark.executor.memory" -> "512m", "spark.executor.instances" 
-> "2", - // Sending some senstive information, which we'll make sure gets redacted + // Sending some sensitive information, which we'll make sure gets redacted "spark.executorEnv.HADOOP_CREDSTORE_PASSWORD" -> YarnClusterDriver.SECRET_PASSWORD, "spark.yarn.appMasterEnv.HADOOP_CREDSTORE_PASSWORD" -> YarnClusterDriver.SECRET_PASSWORD )) http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala ---------------------------------------------------------------------- diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala index cd243b8..ee3932c 100644 --- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala +++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala @@ -1060,7 +1060,7 @@ class SessionCatalog( } /** - * overwirte a metastore function in the database specified in `funcDefinition`.. + * overwrite a metastore function in the database specified in `funcDefinition`.. * If no database is specified, assume the function is in the current database. 
   */
  def alterFunction(funcDefinition: CatalogFunction): Unit = {

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
index 08838d2..f95798d 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
@@ -1345,7 +1345,7 @@ case class ParseToDate(left: Expression, format: Option[Expression], child: Expr
   }
 
   def this(left: Expression) = {
-    // backwards compatability
+    // backwards compatibility
     this(left, None, Cast(left, DateType))
   }

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
index a6631a8..7ff83a9 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
@@ -74,7 +74,7 @@ case class Project(projectList: Seq[NamedExpression], child: LogicalPlan)
  * their output.
  *
  * @param generator the generator expression
- * @param unrequiredChildIndex this paramter starts as Nil and gets filled by the Optimizer.
+ * @param unrequiredChildIndex this parameter starts as Nil and gets filled by the Optimizer.
 *                             It's used as an optimization for omitting data generation that will
 *                             be discarded next by a projection.
 *                             A common use case is when we explode(array(..)) and are interested

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 594952e..dbb5bb4 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -1371,7 +1371,7 @@ object SQLConf {
       "mode to keep the same behavior of Spark prior to 2.3. Note that this config doesn't " +
       "affect Hive serde tables, as they are always overwritten with dynamic mode. This can " +
       "also be set as an output option for a data source using key partitionOverwriteMode " +
-      "(which takes precendence over this setting), e.g. " +
+      "(which takes precedence over this setting), e.g. " +
      "dataframe.write.option(\"partitionOverwriteMode\", \"dynamic\").save(path)."
    )
    .stringConf

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
index a1c976d..94f37f1 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
@@ -235,7 +235,7 @@ class AnalysisSuite extends AnalysisTest with Matchers {
     checkAnalysis(plan, expected)
   }
 
-  test("Analysis may leave unnecassary aliases") {
+  test("Analysis may leave unnecessary aliases") {
     val att1 = testRelation.output.head
     var plan = testRelation.select(
       CreateStruct(Seq(att1, ((att1.as("aa")) + 1).as("a_plus_1"))).as("col"),

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala
index e068c32..f489d33 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ConditionalExpressionSuite.scala
@@ -209,7 +209,7 @@ class ConditionalExpressionSuite extends SparkFunSuite with ExpressionEvalHelper
     checkEvaluation(CaseKeyWhen(literalNull, Seq(c2, c5, c1, c6)), null, row)
   }
 
-  test("case key whn - internal pattern matching expects a List while apply takes a Seq") {
+  test("case key when - internal pattern matching expects a List while apply takes a Seq") {
     val indexedSeq = IndexedSeq(Literal(1), Literal(42), Literal(42), Literal(1))
     val caseKeyWhaen = CaseKeyWhen(Literal(12), indexedSeq)
     assert(caseKeyWhaen.branches ==

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala
index 4aba76c..2d4c3c1 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala
@@ -144,7 +144,7 @@ object StreamingSymmetricHashJoinHelper extends Logging {
     // Join keys of both sides generate rows of the same fields, that is, same sequence of data
-    // types. If one side (say left side) has a column (say timestmap) that has a watermark on it,
+    // types. If one side (say left side) has a column (say timestamp) that has a watermark on it,
     // then it will never consider joining keys that are < state key watermark (i.e. event time
     // watermark). On the other side (i.e. right side), even if there is no watermark defined,
     // there has to be an equivalent column (i.e., timestamp). And any right side data that has the

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala
index bc95b46..817224d 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/TPCDSQuerySuite.scala
@@ -147,7 +147,7 @@ class TPCDSQuerySuite extends BenchmarkQueryTest {
         |`s_company_id` INT, `s_company_name` STRING, `s_street_number` STRING,
         |`s_street_name` STRING, `s_street_type` STRING, `s_suite_number` STRING, `s_city` STRING,
         |`s_county` STRING, `s_state` STRING, `s_zip` STRING, `s_country` STRING,
-        |`s_gmt_offset` DECIMAL(5,2), `s_tax_precentage` DECIMAL(5,2))
+        |`s_gmt_offset` DECIMAL(5,2), `s_tax_percentage` DECIMAL(5,2))
         |USING parquet
       """.stripMargin)

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
index 842251b..57e36e0 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
@@ -132,7 +132,7 @@ class CSVInferSchemaSuite extends SparkFunSuite {
       == StringType)
   }
 
-  test("DoubleType should be infered when user defined nan/inf are provided") {
+  test("DoubleType should be inferred when user defined nan/inf are provided") {
     val options = new CSVOptions(Map("nanValue" -> "nan", "negativeInf" -> "-inf",
       "positiveInf" -> "inf"), false, "GMT")
     assert(CSVInferSchema.inferField(NullType, "nan", options) == DoubleType)

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java
----------------------------------------------------------------------
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java
index f59cdcd..745f385 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/HiveSessionImpl.java
@@ -471,7 +471,7 @@ public class HiveSessionImpl implements HiveSession {
       opHandleSet.add(opHandle);
       return opHandle;
     } catch (HiveSQLException e) {
-      // Refering to SQLOperation.java,there is no chance that a HiveSQLException throws and the asyn
+      // Referring to SQLOperation.java, there is no chance that a HiveSQLException throws and the asyn
       // background operation submits to thread pool successfully at the same time. So, Cleanup
       // opHandle directly when got HiveSQLException
       operationManager.closeOperation(opHandle);

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922 b/sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922
index 06461b5..967e2d3 100644
--- a/sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922
+++ b/sql/hive/src/test/resources/golden/udf_instr-1-2e76f819563dbaba4beb51e3a130b922
@@ -1 +1 @@
-instr(str, substr) - Returns the index of the first occurance of substr in str
+instr(str, substr) - Returns the index of the first occurrence of substr in str

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182 b/sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182
index 5a8c342..0a74534 100644
--- a/sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182
+++ b/sql/hive/src/test/resources/golden/udf_instr-2-32da357fc754badd6e3898dcc8989182
@@ -1,4 +1,4 @@
-instr(str, substr) - Returns the index of the first occurance of substr in str
+instr(str, substr) - Returns the index of the first occurrence of substr in str
 Example:
   > SELECT instr('Facebook', 'boo') FROM src LIMIT 1;
   5

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e b/sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e
index 84bea32..8e70b0c 100644
--- a/sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e
+++ b/sql/hive/src/test/resources/golden/udf_locate-1-6e41693c9c6dceea4d7fab4c02884e4e
@@ -1 +1 @@
-locate(substr, str[, pos]) - Returns the position of the first occurance of substr in str after position pos
+locate(substr, str[, pos]) - Returns the position of the first occurrence of substr in str after position pos

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478 b/sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478
index 092e125..e103255 100644
--- a/sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478
+++ b/sql/hive/src/test/resources/golden/udf_locate-2-d9b5934457931447874d6bb7c13de478
@@ -1,4 +1,4 @@
-locate(substr, str[, pos]) - Returns the position of the first occurance of substr in str after position pos
+locate(substr, str[, pos]) - Returns the position of the first occurrence of substr in str after position pos
 Example:
   > SELECT locate('bar', 'foobarbar', 5) FROM src LIMIT 1;
   7

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8 b/sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8
index 9ced4ee..6caa4b6 100644
--- a/sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8
+++ b/sql/hive/src/test/resources/golden/udf_translate-2-f7aa38a33ca0df73b7a1e6b6da4b7fe8
@@ -6,8 +6,8 @@ translate('abcdef', 'adc', '19') returns '1b9ef' replacing 'a' with '1', 'd' wit
 translate('a b c d', ' ', '') return 'abcd' removing all spaces from the input string
-If the same character is present multiple times in the input string, the first occurence of the character is the one that's considered for matching. However, it is not recommended to have the same character more than once in the from string since it's not required and adds to confusion.
+If the same character is present multiple times in the input string, the first occurrence of the character is the one that's considered for matching. However, it is not recommended to have the same character more than once in the from string since it's not required and adds to confusion.
 For example,
-translate('abcdef', 'ada', '192') returns '1bc9ef' replaces 'a' with '1' and 'd' with '9' ignoring the second occurence of 'a' in the from string mapping it to '2'
+translate('abcdef', 'ada', '192') returns '1bc9ef' replaces 'a' with '1' and 'd' with '9' ignoring the second occurrence of 'a' in the from string mapping it to '2'

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q
index 965b0b7..633150b 100644
--- a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q
+++ b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/annotate_stats_join.q
@@ -43,7 +43,7 @@ analyze table loc_orc compute statistics for columns state,locid,zip,year;
 -- dept_orc - 4
 -- loc_orc - 8
 
--- count distincts for relevant columns (since count distinct values are approximate in some cases count distint values will be greater than number of rows)
+-- count distincts for relevant columns (since count distinct values are approximate in some cases count distinct values will be greater than number of rows)
 -- emp_orc.deptid - 3
 -- emp_orc.lastname - 7
 -- dept_orc.deptid - 6

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q
index da2e26f..e828977 100644
--- a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q
+++ b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q
@@ -26,7 +26,7 @@ set hive.optimize.bucketmapjoin.sortedmerge=true;
 -- Since size is being used to find the big table, the order of the tables in the join does not matter
 -- The tables are only bucketed and not sorted, the join should not be converted
--- Currenly, a join is only converted to a sort-merge join without a hint, automatic conversion to
+-- Currently, a join is only converted to a sort-merge join without a hint, automatic conversion to
 -- bucketized mapjoin is not done
 explain extended select count(*) FROM bucket_small a JOIN bucket_big b ON a.key = b.key;
 select count(*) FROM bucket_small a JOIN bucket_big b ON a.key = b.key;

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q
index 6fe5117..e4ed719 100644
--- a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q
+++ b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/avro_partitioned.q
@@ -69,5 +69,5 @@ SELECT * FROM episodes_partitioned WHERE doctor_pt > 6 ORDER BY air_date;
 SELECT * FROM episodes_partitioned ORDER BY air_date LIMIT 5;
 -- Fetch w/filter to specific partition
 SELECT * FROM episodes_partitioned WHERE doctor_pt = 6;
--- Fetch w/non-existant partition
+-- Fetch w/non-existent partition
 SELECT * FROM episodes_partitioned WHERE doctor_pt = 7 LIMIT 5;

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q
index 0c9f1b8..39d2d24 100644
--- a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q
+++ b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/decimal_udf.q
@@ -22,7 +22,7 @@ SELECT key + (value/2) FROM DECIMAL_UDF;
 EXPLAIN SELECT key + '1.0' FROM DECIMAL_UDF;
 SELECT key + '1.0' FROM DECIMAL_UDF;
 
--- substraction
+-- subtraction
 EXPLAIN SELECT key - key FROM DECIMAL_UDF;
 SELECT key - key FROM DECIMAL_UDF;

http://git-wip-us.apache.org/repos/asf/spark/blob/8ec25cd6/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
index 3aeae0d..d677fe6 100644
--- a/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
+++ b/sql/hive/src/test/resources/ql/src/test/queries/clientpositive/groupby2_map_multi_distinct.q
@@ -13,7 +13,7 @@ INSERT OVERWRITE TABLE dest1 SELECT substr(src.key,1,1), count(DISTINCT substr(s
 SELECT dest1.* FROM dest1 ORDER BY key;
 
--- HIVE-5560 when group by key is used in distinct funtion, invalid result are returned
+-- HIVE-5560 when group by key is used in distinct function, invalid result are returned
 EXPLAIN
 FROM src

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org