[spark] branch master updated (0ec03556 -> afd70a0)

2020-01-09 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0ec03556 [SPARK-30439][SQL] Support non-nullable column in CREATE 
TABLE, ADD COLUMN and ALTER TABLE
 add afd70a0  [SPARK-30480][PYSPARK][TESTS] Fix 'test_memory_limit' on 
pyspark test

No new revisions were added by this update.

Summary of changes:
 python/pyspark/tests/test_worker.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
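
For background, a hedged sketch of what the fixed test exercises: with
`spark.executor.pyspark.memory` set, the Python worker caps its address space
through the `resource` module. This can be observed roughly as below; the code
is illustrative only, not the actual test, and the limit is applied only on
platforms that support RLIMIT_AS.

```
import resource

from pyspark.sql import SparkSession

# Illustrative only: ask for a 2 GiB cap on PySpark worker memory.
spark = (SparkSession.builder
         .config("spark.executor.pyspark.memory", "2g")
         .getOrCreate())

def report_rlimit(_):
    # Inside the worker, the soft RLIMIT_AS should reflect the configured cap.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    yield soft

print(spark.sparkContext.parallelize([0], 1).mapPartitions(report_rlimit).collect())
```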


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] maropu commented on issue #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
maropu commented on issue #248: Document the signals for Hive 1.2 and 2.3 
profiles
URL: https://github.com/apache/spark-website/pull/248#issuecomment-572837446
 
 
   oh, the fast merging.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] dongjoon-hyun commented on issue #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
dongjoon-hyun commented on issue #248: Document the signals for Hive 1.2 and 
2.3 profiles
URL: https://github.com/apache/spark-website/pull/248#issuecomment-572836853
 
 
   +1, late LGTM. That was an all-time record (6 minutes!).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] HyukjinKwon closed pull request #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
HyukjinKwon closed pull request #248: Document the signals for Hive 1.2 and 2.3 
profiles
URL: https://github.com/apache/spark-website/pull/248
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark-website] branch asf-site updated: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ad84357  Document the signals for Hive 1.2 and 2.3 profiles
ad84357 is described below

commit ad8435787be874fdd2c0e308acacfa0f1601dc72
Author: HyukjinKwon 
AuthorDate: Fri Jan 10 10:47:20 2020 +0900

Document the signals for Hive 1.2 and 2.3 profiles

This PR documents the signals for Hive 1.2 and 2.3 profiles

Author: HyukjinKwon 

Closes #248 from HyukjinKwon/hive-signals.
---
 developer-tools.md| 2 ++
 site/developer-tools.html | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/developer-tools.md b/developer-tools.md
index 1b98c7b..8ed4141 100644
--- a/developer-tools.md
+++ b/developer-tools.md
@@ -256,6 +256,8 @@ your pull request to change testing behavior. This includes:
 - `[test-hadoop2.7]` - signals to test using Spark's Hadoop 2.7 profile
 - `[test-hadoop3.2]` - signals to test using Spark's Hadoop 3.2 profile
- `[test-hadoop3.2][test-java11]` - signals to test using Spark's Hadoop 3.2 profile with JDK 11
+- `[test-hive1.2]` - signals to test using Spark's Hive 1.2 profile
+- `[test-hive2.3]` - signals to test using Spark's Hive 2.3 profile
 
 Binary compatibility
 
diff --git a/site/developer-tools.html b/site/developer-tools.html
index 3ba40b2..c449e20 100644
--- a/site/developer-tools.html
+++ b/site/developer-tools.html
@@ -435,6 +435,8 @@ your pull request to change testing behavior. This includes:
   [test-hadoop2.7] - signals to test using Spark's Hadoop 2.7 profile
   [test-hadoop3.2] - signals to test using Spark's Hadoop 3.2 profile
   [test-hadoop3.2][test-java11] - signals to test using Spark's Hadoop 3.2 profile with JDK 11
+  [test-hive1.2] - signals to test using Spark's Hive 1.2 profile
+  [test-hive2.3] - signals to test using Spark's Hive 2.3 profile
 
 
 Binary compatibility


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] HyukjinKwon commented on issue #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
HyukjinKwon commented on issue #248: Document the signals for Hive 1.2 and 2.3 
profiles
URL: https://github.com/apache/spark-website/pull/248#issuecomment-572836495
 
 
   Thanks @srowen.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] HyukjinKwon commented on issue #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
HyukjinKwon commented on issue #248: Document the signals for Hive 1.2 and 2.3 
profiles
URL: https://github.com/apache/spark-website/pull/248#issuecomment-572835533
 
 
   cc @gatorsmile, @dongjoon-hyun, @wangyum 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[GitHub] [spark-website] HyukjinKwon opened a new pull request #248: Document the signals for Hive 1.2 and 2.3 profiles

2020-01-09 Thread GitBox
HyukjinKwon opened a new pull request #248: Document the signals for Hive 1.2 
and 2.3 profiles
URL: https://github.com/apache/spark-website/pull/248
 
 
   This PR documents the signals for Hive 1.2 and 2.3 profiles 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-30439][SQL] Support non-nullable column in CREATE TABLE, ADD COLUMN and ALTER TABLE

2020-01-09 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 0ec03556 [SPARK-30439][SQL] Support non-nullable column in CREATE 
TABLE, ADD COLUMN and ALTER TABLE
0ec03556 is described below

commit 0ec0355611e7ce79599f86862a90611f7cde6227
Author: Wenchen Fan 
AuthorDate: Fri Jan 10 10:34:46 2020 +0900

[SPARK-30439][SQL] Support non-nullable column in CREATE TABLE, ADD COLUMN 
and ALTER TABLE

### What changes were proposed in this pull request?

Allow users to specify NOT NULL in CREATE TABLE and ADD COLUMN column
definitions, and add new SQL syntax to alter column nullability: ALTER TABLE
... ALTER COLUMN SET/DROP NOT NULL. This follows the SQL standard syntax:
```
<alter column definition> ::=
  ALTER [ COLUMN ] <column name> <alter column action>

<alter column action> ::=
    <set column default clause>
  | <drop column default clause>
  | <set column not null clause>
  | <drop column not null clause>
  | ...

<set column not null clause> ::=
  SET NOT NULL

<drop column not null clause> ::=
  DROP NOT NULL
```
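
For illustration only, a minimal PySpark sketch of the new statements (the
catalog name `testcat`, namespace `db` and the table are assumptions for the
example; whether NOT NULL is accepted depends on the catalog implementation):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# NOT NULL in a CREATE TABLE column definition.
spark.sql("CREATE TABLE testcat.db.people (id BIGINT NOT NULL, name STRING) USING parquet")

# NOT NULL in ADD COLUMN.
spark.sql("ALTER TABLE testcat.db.people ADD COLUMN age INT NOT NULL")

# The new syntax to change nullability after the fact.
spark.sql("ALTER TABLE testcat.db.people ALTER COLUMN name SET NOT NULL")
spark.sql("ALTER TABLE testcat.db.people ALTER COLUMN name DROP NOT NULL")
```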

### Why are the changes needed?

Previously we did not support this because the table schema in the Hive catalog
is always nullable. Now that we have the catalog plugin API, it makes more sense
to support NOT NULL on the Spark side and let catalog implementations decide
whether they support it.

### Does this PR introduce any user-facing change?

Yes, this is a new feature

### How was this patch tested?

new tests

Closes #27110 from cloud-fan/nullable.

Authored-by: Wenchen Fan 
Signed-off-by: HyukjinKwon 
---
 .../apache/spark/sql/catalyst/parser/SqlBase.g4|  9 ++-
 .../spark/sql/connector/catalog/TableChange.java   | 69 --
 .../sql/catalyst/analysis/CheckAnalysis.scala  |  9 ++-
 .../sql/catalyst/analysis/ResolveCatalogs.scala| 25 
 .../spark/sql/catalyst/parser/AstBuilder.scala | 65 +++-
 .../sql/catalyst/plans/logical/statements.scala|  2 +
 .../sql/connector/catalog/CatalogV2Util.scala  | 11 ++--
 .../spark/sql/catalyst/parser/DDLParserSuite.scala | 68 -
 .../sql/connector/catalog/TableCatalogSuite.scala  | 21 +--
 .../catalyst/analysis/ResolveSessionCatalog.scala  | 55 ++---
 .../spark/sql/connector/AlterTableTests.scala  | 39 +++-
 .../spark/sql/connector/DataSourceV2SQLSuite.scala | 12 ++--
 .../datasources/v2/V2SessionCatalogSuite.scala | 21 +--
 13 files changed, 273 insertions(+), 133 deletions(-)

diff --git 
a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 
b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
index 751c782..287227e 100644
--- 
a/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
+++ 
b/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
@@ -161,6 +161,9 @@ statement
 | ALTER TABLE table=multipartIdentifier
 (ALTER | CHANGE) COLUMN? column=multipartIdentifier
 (TYPE dataType)? commentSpec? colPosition? 
#alterTableColumn
+| ALTER TABLE table=multipartIdentifier
+ALTER COLUMN? column=multipartIdentifier
+setOrDrop=(SET | DROP) NOT NULL
#alterColumnNullability
 | ALTER TABLE table=multipartIdentifier partitionSpec?
 CHANGE COLUMN?
 colName=multipartIdentifier colType colPosition?   
#hiveChangeColumn
@@ -869,7 +872,7 @@ qualifiedColTypeWithPositionList
 ;
 
 qualifiedColTypeWithPosition
-: name=multipartIdentifier dataType commentSpec? colPosition?
+: name=multipartIdentifier dataType (NOT NULL)? commentSpec? colPosition?
 ;
 
 colTypeList
@@ -877,7 +880,7 @@ colTypeList
 ;
 
 colType
-: colName=errorCapturingIdentifier dataType commentSpec?
+: colName=errorCapturingIdentifier dataType (NOT NULL)? commentSpec?
 ;
 
 complexColTypeList
@@ -885,7 +888,7 @@ complexColTypeList
 ;
 
 complexColType
-: identifier ':' dataType commentSpec?
+: identifier ':' dataType (NOT NULL)? commentSpec?
 ;
 
 whenClause
diff --git 
a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java
 
b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java
index 7834399..58a592c 100644
--- 
a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java
+++ 
b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java
@@ -168,25 +168,22 @@ public interface TableChange {
* @return a TableChange for the update
*/
   static TableChange updateColumnType(String[] fieldNames, DataType 
newDataType) {
-return new UpdateColumnType(fieldNames, newDataType, true);
+return new UpdateColumnType(fieldNames, newDataType);
   }
 
   /**
-   * Create a TableChange for updating the type of a field.
+   * Create a 

[spark] branch master updated (4d23938 -> 1ffa627)

2020-01-09 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4d23938  [MINOR][SQL][TEST-HIVE1.2] Fix scalastyle error due to length 
line in hive-1.2 profile
 add 1ffa627  [SPARK-30416][SQL] Log a warning for deprecated SQL config in 
`set()` and `unset()`

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/internal/SQLConf.scala| 43 +-
 .../scala/org/apache/spark/sql/RuntimeConfig.scala | 19 +-
 .../apache/spark/sql/internal/SQLConfSuite.scala   | 32 
 3 files changed, 91 insertions(+), 3 deletions(-)
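
For context, the change makes `set()` and `unset()` log a warning when the key
is a deprecated SQL config; a minimal sketch (the key below is only an example
of a config that may be listed as deprecated):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# With this change, touching a deprecated SQL config through the runtime
# config API logs a deprecation warning instead of staying silent.
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
spark.conf.unset("spark.sql.execution.arrow.enabled")
```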


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (c0e9f9f -> 4d23938)

2020-01-09 Thread shaneknapp
This is an automated email from the ASF dual-hosted git repository.

shaneknapp pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c0e9f9f  [SPARK-30459][SQL] Fix ignoreMissingFiles/ignoreCorruptFiles 
in data source v2
 add 4d23938  [MINOR][SQL][TEST-HIVE1.2] Fix scalastyle error due to length 
line in hive-1.2 profile

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/execution/datasources/orc/OrcFilterSuite.scala| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (f8d5957 -> c0e9f9f)

2020-01-09 Thread gengliang
This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from f8d5957  [SPARK-29219][SQL] Introduce SupportsCatalogOptions for 
TableProvider
 add c0e9f9f  [SPARK-30459][SQL] Fix ignoreMissingFiles/ignoreCorruptFiles 
in data source v2

No new revisions were added by this update.

Summary of changes:
 .../datasources/v2/FilePartitionReader.scala   |  9 ++
 .../spark/sql/FileBasedDataSourceSuite.scala   | 37 ++
 2 files changed, 27 insertions(+), 19 deletions(-)
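
For reference, a minimal sketch of the settings this fix concerns on the V2
file read path (the data path is hypothetical, and mapping the fix to these two
SQL configs is an assumption based on the option names in the title):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Session-wide switches; with the fix, V2 file scans skip missing or corrupt
# files instead of failing the whole read.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

df = spark.read.parquet("/data/events")
df.count()
```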


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-29219][SQL] Introduce SupportsCatalogOptions for TableProvider

2020-01-09 Thread brkyvz
This is an automated email from the ASF dual-hosted git repository.

brkyvz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new f8d5957  [SPARK-29219][SQL] Introduce SupportsCatalogOptions for 
TableProvider
f8d5957 is described below

commit f8d59572b014e5254b0c574b26e101c2e4157bdd
Author: Burak Yavuz 
AuthorDate: Thu Jan 9 11:18:16 2020 -0800

[SPARK-29219][SQL] Introduce SupportsCatalogOptions for TableProvider

### What changes were proposed in this pull request?

This PR introduces `SupportsCatalogOptions` as an interface for
`TableProvider`. Through `SupportsCatalogOptions`, V2 data sources can implement
the two methods `extractIdentifier` and `extractCatalog` to support table
creation and existence checks without requiring a formal TableCatalog
implementation.

We currently don't support all SaveModes for DataSourceV2 in
`DataFrameWriter.save`. The idea is that, eventually, file-based tables written
with `DataFrameWriter.save(path)` will create a PathIdentifier whose name is
`path`, and the V2SessionCatalog will be able to perform file system checks at
`path` to support the ErrorIfExists and Ignore SaveModes.
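
As a rough usage sketch (the format name `com.example.catalogsource` and the
option keys are hypothetical; they stand in for a V2 source that implements
`SupportsCatalogOptions`):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10)

# Because the source implements SupportsCatalogOptions, the writer can resolve
# a catalog and table identifier from the options and check for table
# existence, so SaveModes like Ignore and ErrorIfExists can be honoured.
(df.write
   .format("com.example.catalogsource")   # hypothetical V2 source
   .option("table", "db.events")          # read by extractIdentifier (assumed option key)
   .option("catalog", "my_catalog")       # read by extractCatalog (assumed option key)
   .mode("ignore")
   .save())
```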

### Why are the changes needed?

To support all Save modes for V2 data sources with DataFrameWriter. Since 
we can now support table creation, we will be able to provide partitioning 
information when first creating the table as well.

### Does this PR introduce any user-facing change?

Introduces a new interface

### How was this patch tested?

Will add tests once interface is vetted.

Closes #26913 from brkyvz/catalogOptions.

Lead-authored-by: Burak Yavuz 
Co-authored-by: Burak Yavuz 
Signed-off-by: Burak Yavuz 
---
 .../apache/spark/sql/kafka010/KafkaSinkSuite.scala |  13 +-
 .../connector/catalog/SupportsCatalogOptions.java  |  53 +
 .../sql/connector/catalog/CatalogV2Util.scala  |  11 ++
 .../org/apache/spark/sql/DataFrameReader.scala |  21 +-
 .../org/apache/spark/sql/DataFrameWriter.scala | 128 
 .../connector/SupportsCatalogOptionsSuite.scala| 219 +
 .../sql/connector/TestV2SessionCatalogBase.scala   |   5 +
 7 files changed, 406 insertions(+), 44 deletions(-)

diff --git 
a/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
 
b/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
index e2dcd62..5c8c5b1 100644
--- 
a/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
+++ 
b/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
@@ -21,6 +21,7 @@ import java.nio.charset.StandardCharsets.UTF_8
 import java.util.concurrent.atomic.AtomicInteger
 
 import scala.reflect.ClassTag
+import scala.util.Try
 
 import org.apache.kafka.clients.producer.ProducerConfig
 import org.apache.kafka.clients.producer.internals.DefaultPartitioner
@@ -500,7 +501,7 @@ abstract class KafkaSinkBatchSuiteBase extends 
KafkaSinkSuiteBase {
 TestUtils.assertExceptionMsg(ex, "null topic present in the data")
   }
 
-  protected def testUnsupportedSaveModes(msg: (SaveMode) => String): Unit = {
+  protected def testUnsupportedSaveModes(msg: (SaveMode) => Seq[String]): Unit 
= {
 val topic = newTopic()
 testUtils.createTopic(topic)
 val df = Seq[(String, String)](null.asInstanceOf[String] -> 
"1").toDF("topic", "value")
@@ -513,7 +514,10 @@ abstract class KafkaSinkBatchSuiteBase extends 
KafkaSinkSuiteBase {
   .mode(mode)
   .save()
   }
-  TestUtils.assertExceptionMsg(ex, msg(mode))
+  val errorChecks = msg(mode).map(m => 
Try(TestUtils.assertExceptionMsg(ex, m)))
+  if (!errorChecks.exists(_.isSuccess)) {
+fail("Error messages not found in exception trace")
+  }
 }
   }
 
@@ -541,7 +545,7 @@ class KafkaSinkBatchSuiteV1 extends KafkaSinkBatchSuiteBase 
{
   .set(SQLConf.USE_V1_SOURCE_LIST, "kafka")
 
   test("batch - unsupported save modes") {
-testUnsupportedSaveModes((mode) => s"Save mode ${mode.name} not allowed 
for Kafka")
+testUnsupportedSaveModes((mode) => s"Save mode ${mode.name} not allowed 
for Kafka" :: Nil)
   }
 }
 
@@ -552,7 +556,8 @@ class KafkaSinkBatchSuiteV2 extends KafkaSinkBatchSuiteBase 
{
   .set(SQLConf.USE_V1_SOURCE_LIST, "")
 
   test("batch - unsupported save modes") {
-testUnsupportedSaveModes((mode) => s"cannot be written with ${mode.name} 
mode")
+testUnsupportedSaveModes((mode) =>
+  Seq(s"cannot be written with ${mode.name} mode", "does not support 
truncate"))
   }
 
   test("generic - write big data with small producer buffer") {
diff --git 
a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsCatalogOptions.java
 

[spark] branch master updated (94fc0e3 -> c88124a)

2020-01-09 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 94fc0e3  [SPARK-30428][SQL] File source V2: support partition pruning
 add c88124a  [SPARK-30452][ML][PYSPARK] Add predict and numFeatures in 
Python IsotonicRegressionModel

No new revisions were added by this update.

Summary of changes:
 python/pyspark/ml/regression.py | 18 ++
 1 file changed, 18 insertions(+)
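
A short PySpark sketch of the newly exposed members (the toy data is made up
for illustration):

```
from pyspark.ml.linalg import Vectors
from pyspark.ml.regression import IsotonicRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Tiny toy dataset: (label, features).
df = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0)),
     (1.0, Vectors.dense(1.0)),
     (3.0, Vectors.dense(2.0))],
    ["label", "features"])

model = IsotonicRegression().fit(df)

print(model.numFeatures)   # number of features the model was trained on
print(model.predict(1.5))  # predict a single value without building a DataFrame
```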


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (dcdc9a8 -> 94fc0e3)

2020-01-09 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from dcdc9a8  [SPARK-28198][PYTHON][FOLLOW-UP] Run the tests of MAP ITER 
UDF in Jenkins
 add 94fc0e3  [SPARK-30428][SQL] File source V2: support partition pruning

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/v2/avro/AvroScan.scala| 42 +++-
 .../org/apache/spark/sql/avro/AvroSuite.scala  | 74 +-
 .../spark/sql/execution/SparkOptimizer.scala   |  2 +-
 .../datasources/PruneFileSourcePartitions.scala| 71 -
 .../sql/execution/datasources/v2/FileScan.scala| 48 +++---
 .../datasources/v2/TextBasedFileScan.scala |  6 +-
 .../sql/execution/datasources/v2/csv/CSVScan.scala | 20 --
 .../execution/datasources/v2/json/JsonScan.scala   | 20 --
 .../sql/execution/datasources/v2/orc/OrcScan.scala | 16 +++--
 .../datasources/v2/parquet/ParquetScan.scala   | 15 +++--
 .../execution/datasources/v2/text/TextScan.scala   | 19 +-
 .../spark/sql/FileBasedDataSourceSuite.scala   | 48 +-
 .../sql/execution/datasources/orc/OrcTest.scala| 10 +--
 .../execution/datasources/orc/OrcFilterSuite.scala |  9 ++-
 .../execution/datasources/orc/OrcFilterSuite.scala |  9 ++-
 15 files changed, 323 insertions(+), 86 deletions(-)
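
As a rough illustration of what partition pruning in File source V2 enables
(the partitioned layout and path are hypothetical): a filter on a partition
column lets the scan skip non-matching directories instead of listing and
reading every partition.

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical layout: /data/logs/date=2020-01-01/..., /data/logs/date=2020-01-02/...
logs = spark.read.parquet("/data/logs")

# With partition pruning in the V2 file scan, only date=2020-01-09 is read.
logs.filter(logs.date == "2020-01-09").count()
```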


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org


