GitHub user baifanwudi opened a pull request:

    https://github.com/apache/spark/pull/14505

    Branch 2.0

    ## What changes were proposed in this pull request?
    
    (Please fill in changes proposed in this fix)
    
    
    ## How was this patch tested?
    
    (Please explain how this patch was tested. E.g. unit tests, integration 
tests, manual tests)
    
    
    (If this patch involves UI changes, please attach a screenshot; otherwise, 
remove this)
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-2.0

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14505.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #14505
    
----
commit ba71cf451efceaa6b454baa51c7a6b7e184d3cb7
Author: Bryan Cutler <[email protected]>
Date:   2016-06-29T12:06:38Z

    [SPARK-16261][EXAMPLES][ML] Fixed incorrect appNames in ML Examples
    
    ## What changes were proposed in this pull request?
    
    Some appNames in the ML examples are incorrect, mostly in PySpark but one in Scala. This PR corrects the names.
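    
    For context, the appName is just the string each example passes to its session builder; a minimal sketch of the pattern being fixed (example name hypothetical):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // The appName string should match the example it appears in, e.g.:
    val spark = SparkSession.builder
      .appName("NaiveBayesExample")
      .getOrCreate()
    ```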
    
    ## How was this patch tested?
    Style, local tests
    
    Author: Bryan Cutler <[email protected]>
    
    Closes #13949 from BryanCutler/pyspark-example-appNames-fix-SPARK-16261.
    
    (cherry picked from commit 21385d02a987bcee1198103e447c019f7a769d68)
    Signed-off-by: Nick Pentreath <[email protected]>

commit d96e8c2dd0a9949751d3074b6ab61eee12f5d622
Author: Yanbo Liang <[email protected]>
Date:   2016-06-29T18:20:35Z

    [MINOR][SPARKR] Fix arguments of survreg in SparkR
    
    ## What changes were proposed in this pull request?
    Fix the wrong argument descriptions of ```survreg``` in SparkR.
    
    ## How was this patch tested?
    ```Arguments``` section of ```survreg``` doc before this PR (with wrong 
description for ```path``` and missing ```overwrite```):
    
![image](https://cloud.githubusercontent.com/assets/1962026/16447548/fe7a5ed4-3da1-11e6-8b96-b5bf2083b07e.png)
    
    After this PR:
    
![image](https://cloud.githubusercontent.com/assets/1962026/16447617/368e0b18-3da2-11e6-8277-45640fb11859.png)
    
    Author: Yanbo Liang <[email protected]>
    
    Closes #13970 from yanboliang/spark-16143-followup.
    
    (cherry picked from commit c6a220d756f23ee89a0d1366b20259890c9d67c9)
    Signed-off-by: Xiangrui Meng <[email protected]>

commit 1cde325e29286a8c6631b0b32351994aad7db567
Author: Xin Ren <[email protected]>
Date:   2016-06-29T18:25:00Z

    [SPARK-16140][MLLIB][SPARKR][DOCS] Group k-means method in generated R doc
    
    https://issues.apache.org/jira/browse/SPARK-16140
    
    ## What changes were proposed in this pull request?
    
    Group the R doc of spark.kmeans, predict(KM), summary(KM), 
read/write.ml(KM) under Rd spark.kmeans. The example code was updated.
    
    ## How was this patch tested?
    
    Tested on my local machine
    
    On my laptop `jekyll build` is failing to build the API docs, so I can only show the HTML I generated manually from the Rd files, with no CSS applied, but the doc content should be there.
    
    
![screenshotkmeans](https://cloud.githubusercontent.com/assets/3925641/16403203/c2c9ca1e-3ca7-11e6-9e29-f2164aee75fc.png)
    
    Author: Xin Ren <[email protected]>
    
    Closes #13921 from keypointt/SPARK-16140.
    
    (cherry picked from commit 8c9cd0a7a719ce4286f77f35bb787e2b626a472e)
    Signed-off-by: Xiangrui Meng <[email protected]>

commit edd1905c0fde69025cb6d8d8f15d13d6a6da0e3b
Author: gatorsmile <[email protected]>
Date:   2016-06-29T18:30:49Z

    [SPARK-16236][SQL][FOLLOWUP] Add Path Option back to Load API in 
DataFrameReader
    
    #### What changes were proposed in this pull request?
    In the Python API, we have the same issue. Thanks for identifying it, zsxwing! Below is an example:
    ```Python
    spark.read.format('json').load('python/test_support/sql/people.json')
    ```
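    
    For reference, `load(path)` passes its argument through as the `path` option; a Scala sketch of the equivalence (using the file path from the example above):
    
    ```scala
    // These two calls are equivalent: load(path) forwards its argument
    // as the "path" option of the data source.
    spark.read.format("json").load("python/test_support/sql/people.json")
    spark.read.format("json").option("path", "python/test_support/sql/people.json").load()
    ```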
    #### How was this patch tested?
    Existing test cases cover the changes in this PR.
    
    Author: gatorsmile <[email protected]>
    
    Closes #13965 from gatorsmile/optionPaths.
    
    (cherry picked from commit 39f2eb1da34f26bf68c535c8e6b796d71a37a651)
    Signed-off-by: Shixiong Zhu <[email protected]>

commit 3cc258efb14ee9a35163daa3fa8f4724507ac4af
Author: Tathagata Das <[email protected]>
Date:   2016-06-29T18:45:57Z

    [SPARK-16256][SQL][STREAMING] Added Structured Streaming Programming Guide
    
    The title says it all.
    
    Author: Tathagata Das <[email protected]>
    
    Closes #13945 from tdas/SPARK-16256.
    
    (cherry picked from commit 64132a14fb7a7255feeb5847a54f541fe551bf23)
    Signed-off-by: Tathagata Das <[email protected]>

commit 809af6d9d7df17f5889ebd8640c189e8d1e143a8
Author: hyukjinkwon <[email protected]>
Date:   2016-06-29T20:32:03Z

    [TRIVIAL] [PYSPARK] Clean up orc compression option as well
    
    ## What changes were proposed in this pull request?
    
    This PR corrects the ORC compression option for PySpark as well. I think this was missed in https://github.com/apache/spark/pull/13948.
    
    ## How was this patch tested?
    
    N/A
    
    Author: hyukjinkwon <[email protected]>
    
    Closes #13963 from HyukjinKwon/minor-orc-compress.
    
    (cherry picked from commit d8a87a3ed211dd08f06eeb9560661b8f11ce82fa)
    Signed-off-by: Davies Liu <[email protected]>

commit a7f66ef62b94cdcf65c3043406fd5fd8d6a584c1
Author: Yin Huai <[email protected]>
Date:   2016-06-29T21:42:58Z

    [SPARK-16301] [SQL] The analyzer rule for resolving using joins should 
respect the case sensitivity setting.
    
    ## What changes were proposed in this pull request?
    The analyzer rule for resolving using joins should respect the case 
sensitivity setting.
    
    ## How was this patch tested?
    New tests in ResolveNaturalJoinSuite
    
    Author: Yin Huai <[email protected]>
    
    Closes #13977 from yhuai/SPARK-16301.
    
    (cherry picked from commit 8b5a8b25b9d29b7d0949d5663c7394b26154a836)
    Signed-off-by: Davies Liu <[email protected]>

commit ef0253ff6d7fb9bf89ef023f2d5864c70d9d792d
Author: Dongjoon Hyun <[email protected]>
Date:   2016-06-29T22:00:41Z

    [SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws non-intuitive exception
    
    ## What changes were proposed in this pull request?
    
    This PR allows `emptyDataFrame.write` since the user didn't specify any 
partition columns.
    
    **Before**
    ```scala
    scala> spark.emptyDataFrame.write.parquet("/tmp/t1")
    org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
    scala> spark.emptyDataFrame.write.csv("/tmp/t1")
    org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
    ```
    
    After this PR, no exception occurs and the created directory contains only one file, `_SUCCESS`, as expected.
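    
    **After (sketch, output paraphrased from the description above)**
    ```scala
    scala> spark.emptyDataFrame.write.parquet("/tmp/t1")
    // no exception; /tmp/t1 contains only the _SUCCESS marker file
    ```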
    
    ## How was this patch tested?
    
    Pass the Jenkins tests including updated test cases.
    
    Author: Dongjoon Hyun <[email protected]>
    
    Closes #13730 from dongjoon-hyun/SPARK-16006.
    
    (cherry picked from commit 9b1b3ae771babf127f64898d5dc110721597a760)
    Signed-off-by: Reynold Xin <[email protected]>

commit c4cebd5725e6d8ade8c0a02652e251d04903da72
Author: Eric Liang <[email protected]>
Date:   2016-06-29T22:07:32Z

    [SPARK-16238] Metrics for generated method and class bytecode size
    
    ## What changes were proposed in this pull request?
    
    This extends SPARK-15860 to include metrics for the actual bytecode size of Janino-generated methods. They can be accessed in the same way as any other Codahale metric, e.g.
    
    ```
    scala> org.apache.spark.metrics.source.CodegenMetrics.METRIC_GENERATED_CLASS_BYTECODE_SIZE.getSnapshot().getValues()
    res7: Array[Long] = Array(532, 532, 532, 542, 1479, 2670, 3585, 3585)
    
    scala> org.apache.spark.metrics.source.CodegenMetrics.METRIC_GENERATED_METHOD_BYTECODE_SIZE.getSnapshot().getValues()
    res8: Array[Long] = Array(5, 5, 5, 5, 10, 10, 10, 10, 15, 15, 15, 38, 63, 79, 88, 94, 94, 94, 132, 132, 165, 165, 220, 220)
    ```
    
    ## How was this patch tested?
    
    Small unit test, also verified manually that the performance impact is 
minimal (<10%). hvanhovell
    
    Author: Eric Liang <[email protected]>
    
    Closes #13934 from ericl/spark-16238.
    
    (cherry picked from commit 23c58653f900bfb71ef2b3186a95ad2562c33969)
    Signed-off-by: Reynold Xin <[email protected]>

commit 011befd2098bf78979cc8e00de1576bf339583b2
Author: Dongjoon Hyun <[email protected]>
Date:   2016-06-29T23:08:10Z

    [SPARK-16228][SQL] HiveSessionCatalog should return `double`-param 
functions for decimal param lookups
    
    ## What changes were proposed in this pull request?
    
    This PR supports a fallback lookup by casting `DecimalType` into `DoubleType` for external functions with `double`-type parameters.
    
    **Reported Error Scenarios**
    ```scala
    scala> sql("select percentile(value, 0.5) from values 1,2,3 T(value)")
    org.apache.spark.sql.AnalysisException: ... No matching method for class org.apache.hadoop.hive.ql.udf.UDAFPercentile with (int, decimal(38,18)). Possible choices: _FUNC_(bigint, array<double>)  _FUNC_(bigint, double)  ; line 1 pos 7
    
    scala> sql("select percentile_approx(value, 0.5) from values 1.0,2.0,3.0 T(value)")
    org.apache.spark.sql.AnalysisException: ... Only a float/double or float/double array argument is accepted as parameter 2, but decimal(38,18) was passed instead.; line 1 pos 7
    ```
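    
    With the fallback in place, the same queries should resolve against the `double` signatures; a sketch:
    
    ```scala
    // After the DecimalType-to-DoubleType fallback, the double-parameter
    // signature of the Hive UDAF matches and the query analyzes successfully.
    sql("select percentile(value, 0.5) from values 1,2,3 T(value)").show()
    ```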
    
    ## How was this patch tested?
    
    Pass the Jenkins tests (including a new testcase).
    
    Author: Dongjoon Hyun <[email protected]>
    
    Closes #13930 from dongjoon-hyun/SPARK-16228.
    
    (cherry picked from commit 2eaabfa4142d4050be2b45fd277ff5c7fa430581)
    Signed-off-by: Reynold Xin <[email protected]>

commit 8da4314735ed55f259642e2977d8d7bf2212474f
Author: Wenchen Fan <[email protected]>
Date:   2016-06-30T00:15:08Z

    [SPARK-16134][SQL] optimizer rules for typed filter
    
    ## What changes were proposed in this pull request?
    
    This PR adds 3 optimizer rules for typed filters:
    
    1. push typed filters down through `SerializeFromObject` and eliminate the deserialization in the filter condition.
    2. pull typed filters up through `SerializeFromObject` and eliminate the deserialization in the filter condition.
    3. combine adjacent typed filters and share the deserialized object among all the condition expressions.
    
    This PR also adds a `TypedFilter` logical plan, to separate it from the normal filter, so that the concept is clearer and optimizer rules are easier to write.
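    
    A minimal sketch of the kind of chain these rules target (case class and data hypothetical):
    
    ```scala
    import spark.implicits._
    
    case class Point(x: Int, y: Int)
    val ds = Seq(Point(1, 2), Point(3, 4)).toDS()
    
    // Two adjacent typed filters: rule 3 combines them, so each Point is
    // deserialized once and shared by both condition expressions.
    ds.filter(_.x > 0).filter(_.y < 10)
    ```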
    
    ## How was this patch tested?
    
    `TypedFilterOptimizationSuite`
    
    Author: Wenchen Fan <[email protected]>
    
    Closes #13846 from cloud-fan/filter.
    
    (cherry picked from commit d063898bebaaf4ec2aad24c3ac70aabdbf97a190)
    Signed-off-by: Cheng Lian <[email protected]>

commit e1bdf1e02483bf513b6e012e8921d440a5efbc11
Author: Cheng Lian <[email protected]>
Date:   2016-06-30T00:17:43Z

    Revert "[SPARK-16134][SQL] optimizer rules for typed filter"
    
    This reverts commit 8da4314735ed55f259642e2977d8d7bf2212474f.

commit b52bd8070dc852b419283f8a14595e42c179d3d0
Author: Dongjoon Hyun <[email protected]>
Date:   2016-06-30T00:29:17Z

    [SPARK-16267][TEST] Replace deprecated `CREATE TEMPORARY TABLE ... USING` 
from testsuites.
    
    ## What changes were proposed in this pull request?
    
    After SPARK-15674, `DDLStrategy` prints out the following deprecation 
messages in the testsuites.
    
    ```
    12:10:53.284 WARN org.apache.spark.sql.execution.SparkStrategies$DDLStrategy:
    CREATE TEMPORARY TABLE normal_orc_source USING... is deprecated,
    please use CREATE TEMPORARY VIEW viewName USING... instead
    ```
    
    Total: 40
    - JDBCWriteSuite: 14
    - DDLSuite: 6
    - TableScanSuite: 6
    - ParquetSourceSuite: 5
    - OrcSourceSuite: 2
    - SQLQuerySuite: 2
    - HiveCommandSuite: 2
    - JsonSuite: 1
    - PrunedScanSuite: 1
    - FilteredScanSuite: 1
    
    This PR replaces `CREATE TEMPORARY TABLE` with `CREATE TEMPORARY VIEW` in order to remove the deprecation messages in the above testsuites, except `DDLSuite`, `SQLQuerySuite`, and `HiveCommandSuite`.
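    
    A sketch of the mechanical replacement (table name from the warning above; options hypothetical):
    
    ```scala
    // Deprecated form replaced in the testsuites:
    //   CREATE TEMPORARY TABLE normal_orc_source USING orc OPTIONS (path '/tmp/orc')
    sql("CREATE TEMPORARY VIEW normal_orc_source USING orc OPTIONS (path '/tmp/orc')")
    ```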
    
    The Jenkins results show only 10 remaining messages.
    
    
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61422/consoleFull
    
    ## How was this patch tested?
    
    This is a testsuite-only change.
    
    Author: Dongjoon Hyun <[email protected]>
    
    Closes #13956 from dongjoon-hyun/SPARK-16267.
    
    (cherry picked from commit 831a04f5d152d1839c0edfdf65bb728aa5957f16)
    Signed-off-by: Reynold Xin <[email protected]>

commit a54852350346cacae61d851d796bc3a7abd3a048
Author: Cheng Lian <[email protected]>
Date:   2016-06-30T05:50:53Z

    [SPARK-16294][SQL] Labelling support for the include_example Jekyll plugin
    
    ## What changes were proposed in this pull request?
    
    This PR adds labelling support for the `include_example` Jekyll plugin, so 
that we may split a single source file into multiple line blocks with different 
labels, and include them in multiple code snippets in the generated HTML page.
    
    ## How was this patch tested?
    
    Manually tested.
    
    <img width="923" alt="screenshot at jun 29 19-53-21" 
src="https://cloud.githubusercontent.com/assets/230655/16451099/66a76db2-3e33-11e6-84fb-63104c2f0688.png";>
    
    Author: Cheng Lian <[email protected]>
    
    Closes #13972 from liancheng/include-example-with-labels.
    
    (cherry picked from commit bde1d6a61593aeb62370f526542cead94919b0c0)
    Signed-off-by: Xiangrui Meng <[email protected]>

commit 3134f116a3565c3a299fa2e7094acd7304d64280
Author: cody koeninger <[email protected]>
Date:   2016-06-30T06:21:03Z

    [SPARK-12177][STREAMING][KAFKA] Update KafkaDStreams to new Kafka 0.10 
Consumer API
    
    ## What changes were proposed in this pull request?
    
    New Kafka consumer API for the released 0.10 version of Kafka.
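    
    A sketch of the new 0.10 consumer API usage (topic and params hypothetical):
    
    ```scala
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010._
    
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer"  -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example")
    
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,  // an existing StreamingContext
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("topic"), kafkaParams))
    ```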
    
    ## How was this patch tested?
    
    Unit tests, manual tests
    
    Author: cody koeninger <[email protected]>
    
    Closes #11863 from koeninger/kafka-0.9.
    
    (cherry picked from commit dedbceec1ef33ccd88101016de969a1ef3e3e142)
    Signed-off-by: Tathagata Das <[email protected]>

commit c8a7c23054209db5474d96de2a7e2d8a6f8cc0da
Author: Tathagata Das <[email protected]>
Date:   2016-06-30T06:38:19Z

    [SPARK-16256][DOCS] Minor fixes on the Structured Streaming Programming 
Guide
    
    Author: Tathagata Das <[email protected]>
    
    Closes #13978 from tdas/SPARK-16256-1.
    
    (cherry picked from commit 2c3d96134dcc0428983eea087db7e91072215aea)
    Signed-off-by: Tathagata Das <[email protected]>

commit 1d274455cfa45bc63aee6ecf8dbb1f170ee16af2
Author: zlpmichelle <[email protected]>
Date:   2016-06-30T07:50:14Z

    [SPARK-16241][ML] model loading backward compatibility for ml NaiveBayes
    
    ## What changes were proposed in this pull request?
    
    model loading backward compatibility for ml NaiveBayes
    
    ## How was this patch tested?
    
    Existing unit tests and a manual test for loading models saved by Spark 1.6.
    
    Author: zlpmichelle <[email protected]>
    
    Closes #13940 from zlpmichelle/naivebayes.
    
    (cherry picked from commit b30a2dc7c50bfb70bd2b57be70530a9a9fa94a7a)
    Signed-off-by: Yanbo Liang <[email protected]>

commit 6a4f4c1d751db9542ba49755e859b55b42be3236
Author: Tathagata Das <[email protected]>
Date:   2016-06-30T10:06:04Z

    [SPARK-12177][TEST] Removed test to avoid compilation issue in scala 2.10
    
    ## What changes were proposed in this pull request?
    
    The commented lines failed the scala 2.10 build. This is because of a change in the behavior of case classes between 2.10 and 2.11: in scala 2.10, if the companion object of a case class has an explicitly defined apply(), the implicit apply method is not generated; in scala 2.11 it is generated. Hence, the lines compile fine in 2.11 but not in 2.10.
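    
    A minimal sketch of the difference (class hypothetical):
    
    ```scala
    case class Record(s: String)
    object Record { def apply(i: Int): Record = new Record(i.toString) }
    
    Record(1)      // explicit apply: compiles on both 2.10 and 2.11
    Record("one")  // synthesized case-class apply: compiles only on 2.11
    ```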
    
    This simply comments out the tests to fix the broken build. A correct solution is pending.
    
    Author: Tathagata Das <[email protected]>
    
    Closes #13992 from tdas/SPARK-12177.
    
    (cherry picked from commit de8ab313e1fe59f849a62e59349224581ff0b40a)
    Signed-off-by: Cheng Lian <[email protected]>

commit 56207fc3b26cdb8cb50ce460eeab32c06a81bb44
Author: Sean Zhong <[email protected]>
Date:   2016-06-30T13:56:34Z

    [SPARK-16071][SQL] Checks size limit when doubling the array size in 
BufferHolder
    
    ## What changes were proposed in this pull request?
    
    This PR checks the size limit when doubling the array size in BufferHolder, to avoid integer overflow.
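    
    A minimal sketch of the overflow-safe doubling check (constants hypothetical; not the actual BufferHolder code):
    
    ```scala
    // Doubling a large Int capacity can overflow and wrap negative, so compute
    // the new size in Long and fail fast instead of corrupting the buffer.
    def newCapacity(current: Int): Int = {
      val doubled = current.toLong * 2
      if (doubled > Int.MaxValue - 8)  // hypothetical headroom below the VM array limit
        throw new UnsupportedOperationException(
          s"Cannot grow buffer: requested size $doubled exceeds the array size limit")
      doubled.toInt
    }
    ```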
    
    ## How was this patch tested?
    
    Manual test.
    
    Author: Sean Zhong <[email protected]>
    
    Closes #13829 from clockfly/SPARK-16071_2.
    
    (cherry picked from commit 5320adc863ca85b489cef79f156392b9da36e53f)
    Signed-off-by: Wenchen Fan <[email protected]>

commit 98056a1f8683385599f194a4b963769e3342bff3
Author: Tathagata Das <[email protected]>
Date:   2016-06-30T14:10:56Z

    [BUILD] Fix version in poms related to kafka-0-10
    
    Self-explanatory.
    
    Author: Tathagata Das <[email protected]>
    
    Closes #13994 from tdas/SPARK-12177-1.

commit f17ffef38b4749b6b801c198ec207434a4db0c38
Author: Sital Kedia <[email protected]>
Date:   2016-06-30T17:53:18Z

    [SPARK-13850] Force the sorter to Spill when number of elements in th…
    
    Force the sorter to spill when the number of elements in the pointer array reaches a certain size. This works around the issue of TimSort failing on large buffer sizes.
    
    Tested by running a job which was failing without this change due to the TimSort bug.
    
    Author: Sital Kedia <[email protected]>
    
    Closes #13107 from sitalkedia/fix_TimSort.
    
    (cherry picked from commit 07f46afc733b1718d528a6ea5c0d774f047024fa)
    Signed-off-by: Davies Liu <[email protected]>

commit 03008e049a366bc7a63b3915b42ee50320ac6f34
Author: Tathagata Das <[email protected]>
Date:   2016-06-30T21:01:34Z

    [SPARK-16256][DOCS] Fix window operation diagram
    
    Author: Tathagata Das <[email protected]>
    
    Closes #14001 from tdas/SPARK-16256-2.
    
    (cherry picked from commit 5d00a7bc19ddeb1b5247733b55095a03ee7b1a30)
    Signed-off-by: Tathagata Das <[email protected]>

commit 4dc7d377fba39147d8820a5a2866a2fbcb73db98
Author: petermaxlee <[email protected]>
Date:   2016-06-30T23:49:59Z

    [SPARK-16336][SQL] Suggest doing table refresh upon FileNotFoundException
    
    ## What changes were proposed in this pull request?
    This patch appends a message suggesting that users run REFRESH TABLE or reload their DataFrames when Spark sees a FileNotFoundException caused by stale cached metadata.
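    
    For example, the remediation the new message points users to (table name hypothetical):
    
    ```scala
    // Either form invalidates the stale cached metadata for the table:
    spark.sql("REFRESH TABLE people")
    spark.catalog.refreshTable("people")
    ```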
    
    ## How was this patch tested?
    Added a unit test for this in MetadataCacheSuite.
    
    Author: petermaxlee <[email protected]>
    
    Closes #14003 from petermaxlee/SPARK-16336.
    
    (cherry picked from commit fb41670c9263a89ec233861cc91a19cf1bb19073)
    Signed-off-by: Reynold Xin <[email protected]>

commit 17c7522c8cb8f400408cbdc3b8b1251bbca53eec
Author: Reynold Xin <[email protected]>
Date:   2016-06-30T23:51:11Z

    [SPARK-16313][SQL] Spark should not silently drop exceptions in file listing
    
    ## What changes were proposed in this pull request?
    Spark silently drops exceptions during file listing. This is very bad behavior, because it can mask legitimate errors and leave the resulting plan silently returning 0 rows. This patch changes file listing to no longer drop the errors silently.
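    
    A minimal sketch of the anti-pattern being removed (helper names hypothetical):
    
    ```scala
    import java.io.FileNotFoundException
    
    def listLeafFiles(path: String): Seq[String] =
      Option(new java.io.File(path).list()).map(_.toSeq)
        .getOrElse(throw new FileNotFoundException(path))
    
    // Before: the failure was swallowed, so the plan silently saw no files.
    def filesBefore(path: String): Seq[String] =
      try listLeafFiles(path) catch { case _: Exception => Seq.empty }
    
    // After: the exception propagates to the user instead of masking the error.
    def filesAfter(path: String): Seq[String] = listLeafFiles(path)
    ```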
    
    ## How was this patch tested?
    Manually verified.
    
    Author: Reynold Xin <[email protected]>
    
    Closes #13987 from rxin/SPARK-16313.
    
    (cherry picked from commit 3d75a5b2a76eba0855d73476dc2fd579c612d521)
    Signed-off-by: Reynold Xin <[email protected]>

commit d3027c45fbe02752d260aefff9dae707ba5c5d4c
Author: Nick Pentreath <[email protected]>
Date:   2016-07-01T00:52:15Z

    [SPARK-16328][ML][MLLIB][PYSPARK] Add 'asML' and 'fromML' conversion 
methods to PySpark linalg
    
    The move to `ml.linalg` created `asML`/`fromML` utility methods in Scala/Java for converting between representations. These are missing in Python; this PR adds them.
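    
    For reference, the existing Scala methods that this PR mirrors in Python look like the following sketch:
    
    ```scala
    import org.apache.spark.mllib.linalg.{Vectors => OldVectors}
    
    val oldVec = OldVectors.dense(1.0, 2.0)
    val newVec = oldVec.asML                // to org.apache.spark.ml.linalg.Vector
    val back   = OldVectors.fromML(newVec)  // back to the mllib representation
    ```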
    
    ## How was this patch tested?
    
    New doctests.
    
    Author: Nick Pentreath <[email protected]>
    
    Closes #13997 from MLnick/SPARK-16328-python-linalg-convert.
    
    (cherry picked from commit dab10516138867b7c4fc6d42168497e82853b539)
    Signed-off-by: Joseph K. Bradley <[email protected]>

commit 79c96c99977b0478c25b13583a3e88cbab541ba6
Author: Nick Pentreath <[email protected]>
Date:   2016-07-01T00:55:14Z

    [SPARK-15643][DOC][ML] Add breaking changes to ML migration guide
    
    This PR adds the breaking changes from 
[SPARK-14810](https://issues.apache.org/jira/browse/SPARK-14810) to the 
migration guide.
    
    ## How was this patch tested?
    
    Built docs locally.
    
    Author: Nick Pentreath <[email protected]>
    
    Closes #13924 from MLnick/SPARK-15643-migration-guide.
    
    (cherry picked from commit 4a981dc870a31d8b90aac5f6cb22884e02f6fbc6)
    Signed-off-by: Joseph K. Bradley <[email protected]>

commit 94d61de9cdb773c7f3e0ed8909eddcbb208afaa9
Author: Reynold Xin <[email protected]>
Date:   2016-07-01T02:02:35Z

    [SPARK-15954][SQL] Disable loading test tables in Python tests
    
    ## What changes were proposed in this pull request?
    This patch introduces a flag to disable loading test tables in TestHiveSparkSession and disables it in the Python tests. This fixes an issue in which python/run-tests would fail because the test tables could not be loaded.
    
    Note that these test tables are not used outside of HiveCompatibilitySuite. 
In the long run we should probably decouple the loading of test tables from the 
test Hive setup.
    
    ## How was this patch tested?
    This is a test only change.
    
    Author: Reynold Xin <[email protected]>
    
    Closes #14005 from rxin/SPARK-15954.
    
    (cherry picked from commit 38f4d6f44eaa03bdc703662e4a7be9c09ba86e16)
    Signed-off-by: Reynold Xin <[email protected]>

commit 80a7bff897554ce77fe6bc91d62cff8857892322
Author: WeichenXu <[email protected]>
Date:   2016-06-30T15:00:39Z

    [SPARK-15820][PYSPARK][SQL] Add Catalog.refreshTable into python API
    
    ## What changes were proposed in this pull request?
    
    Add the Catalog.refreshTable API to the Python interface for Spark SQL.
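    
    The new Python method mirrors the existing Scala call; a sketch (table name hypothetical):
    
    ```scala
    // Scala counterpart of the Catalog.refreshTable API added to Python here:
    spark.catalog.refreshTable("my_table")
    ```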
    
    ## How was this patch tested?
    
    Existing test.
    
    Author: WeichenXu <[email protected]>
    
    Closes #13558 from WeichenXu123/update_python_sql_interface_refreshTable.
    
    (cherry picked from commit 5344bade8efb6f12aa43fbfbbbc2e3c0c7d16d98)
    Signed-off-by: Cheng Lian <[email protected]>

commit cc3c44b1196c4186c0b55e319460524e9b9f865b
Author: Yuhao Yang <[email protected]>
Date:   2016-07-01T02:34:51Z

    [SPARK-14608][ML] transformSchema needs better documentation
    
    ## What changes were proposed in this pull request?
    jira: https://issues.apache.org/jira/browse/SPARK-14608
    PipelineStage.transformSchema currently has minimal documentation. It should have more to explain what it can do:
    - check the schema
    - check parameter interactions
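    
    A minimal sketch (column names hypothetical) of what an implementation of transformSchema typically does: validate the input schema and describe the output schema without touching any data.
    
    ```scala
    import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
    
    def transformSchema(schema: StructType): StructType = {
      // Check schema: the required input column must be present.
      require(schema.fieldNames.contains("features"), "missing input column 'features'")
      // Parameter interactions could be checked here too; then describe the output.
      schema.add(StructField("prediction", DoubleType, nullable = false))
    }
    ```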
    
    ## How was this patch tested?
    unit test
    
    Author: Yuhao Yang <[email protected]>
    Author: Yuhao Yang <[email protected]>
    
    Closes #12384 from hhbyyh/transformSchemaDoc.
    
    (cherry picked from commit aa6564f37f1d8de77c3b7bfa885000252efffea6)
    Signed-off-by: Joseph K. Bradley <[email protected]>

commit 1932bb683fc11735669c7a4b9e746e2a1dbbcb68
Author: cody koeninger <[email protected]>
Date:   2016-07-01T07:53:36Z

    [SPARK-12177][STREAMING][KAFKA] limit api surface area
    
    ## What changes were proposed in this pull request?
    This is an alternative to the refactoring proposed by 
https://github.com/apache/spark/pull/13996
    
    ## How was this patch tested?
    
    Unit tests; also tested under scala 2.10 via `mvn -Dscala-2.10`.
    
    Author: cody koeninger <[email protected]>
    
    Closes #13998 from koeninger/kafka-0-10-refactor.
    
    (cherry picked from commit fbfd0ab9d70f557c38c7bb8e704475bf19adaf02)
    Signed-off-by: Tathagata Das <[email protected]>

----

