[jira] [Created] (SPARK-31157) Error when connecting to a remote Spark cluster from IDEA

2020-03-14 Thread Yang Ren (Jira)
Yang Ren created SPARK-31157:


 Summary: Error when connecting to a remote Spark cluster from IDEA
 Key: SPARK-31157
 URL: https://issues.apache.org/jira/browse/SPARK-31157
 Project: Spark
  Issue Type: Bug
  Components: Project Infra
Affects Versions: 1.5.1
 Environment: IDEA+Spark1.5.1

Three-node Spark 1.5.1 virtual machine cluster
Reporter: Yang Ren
 Fix For: 1.5.1


When developing a Spark program in IDEA and connecting to a remote Spark cluster, the following exception is thrown:

WARN ReliableDeliverySupervisor: Association with remote system 
[akka.tcp://sparkMaster@192.168.159.129:7077] has failed, address is now gated 
for [5000] ms. Reason: [Disassociated] 
ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread 
Thread[appclient-registration-retry-thread,5,main]
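
For reference, a minimal sketch (not taken from the report; the app name and the trivial job are illustrative assumptions) of the kind of driver program typically run from IDEA against this master URL. With Spark 1.5.x standalone clusters, this "Disassociated" / "gated" warning followed by the registration-retry failure is often caused by a mismatch between the spark-core version on the IDEA project's classpath and the version running on the cluster, or by the master being unreachable from the development machine.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch; the app name and the trivial job below are illustrative only.
object RemoteClusterSmokeTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("idea-remote-test")                // assumed name
      .setMaster("spark://192.168.159.129:7077")     // master address from the log above
    val sc = new SparkContext(conf)
    // If the driver and cluster Spark versions differ, registration with the
    // master can fail repeatedly, producing the warnings shown above.
    println(sc.parallelize(1 to 100).sum())
    sc.stop()
  }
}
{code}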



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31158) Error when connecting to a remote Spark cluster from IDEA

2020-03-14 Thread Yang Ren (Jira)
Yang Ren created SPARK-31158:


 Summary: Error when connecting to a remote Spark cluster from IDEA
 Key: SPARK-31158
 URL: https://issues.apache.org/jira/browse/SPARK-31158
 Project: Spark
  Issue Type: Bug
  Components: Project Infra
Affects Versions: 1.5.1
 Environment: IDEA+Spark1.5.1

Three-node Spark 1.5.1 virtual machine cluster
Reporter: Yang Ren
 Fix For: 1.5.1


When developing a Spark program in IDEA and connecting to a remote Spark cluster, the following exception is thrown:

WARN ReliableDeliverySupervisor: Association with remote system 
[akka.tcp://sparkMaster@192.168.159.129:7077] has failed, address is now gated 
for [5000] ms. Reason: [Disassociated] 
ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread 
Thread[appclient-registration-retry-thread,5,main]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31156) DataFrameStatFunctions API is not consistent with respect to Column type

2020-03-14 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-31156:
-
Description: 
Some functions from the {{org.apache.spark.sql.DataFrameStatFunctions}} class 
accept {{org.apache.spark.sql.Column}} as an argument:
 * {{bloomFilter}}
 * {{countMinSketch}}

The rest of the functions, however, accept only {{String}} (or collections of 
{{String}}s, respectively):
 * {{approxQuantile}}
 * {{corr}}
 * {{cov}}
 * {{crosstab}}
 * {{freqItems}}
 * {{sampleBy}}

  was:
Some functions from {{org.apache.spark.sql.DataFrameStatFunctions}} class 
accepts {{org.apache.spark.sql.Column}} as an argument:
 * {{bloomFilter}}
 * {{countMinSketch}}

When the rest of the functions accept only {{String}} (or collections of 
{{String}}'s respectively):
 * {{ approxQuantile}}
 * {{corr}}
 * {{cov}}
 * {{crosstab}}
 * {{freqItems}}
 * {{sampleBy}}


> DataFrameStatFunctions API is not consistent with respect to Column type
> 
>
> Key: SPARK-31156
> URL: https://issues.apache.org/jira/browse/SPARK-31156
> Project: Spark
>  Issue Type: Improvement
>  Components: Java API
>Affects Versions: 2.4.4
>Reporter: Oleksii Kachaiev
>Priority: Minor
>
> Some functions from the {{org.apache.spark.sql.DataFrameStatFunctions}} class 
> accept {{org.apache.spark.sql.Column}} as an argument:
>  * {{bloomFilter}}
>  * {{countMinSketch}}
> The rest of the functions, however, accept only {{String}} (or collections of 
> {{String}}s, respectively):
>  * {{approxQuantile}}
>  * {{corr}}
>  * {{cov}}
>  * {{crosstab}}
>  * {{freqItems}}
>  * {{sampleBy}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31149) PySpark job not killing Spark Daemon processes after the executor is killed due to OOM

2020-03-14 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-31149:
-
Fix Version/s: (was: 2.4.5)

> PySpark job not killing Spark Daemon processes after the executor is killed 
> due to OOM
> --
>
> Key: SPARK-31149
> URL: https://issues.apache.org/jira/browse/SPARK-31149
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.4.5
>Reporter: Arsenii Venherak
>Priority: Major
>
> {code:java}
> 2020-03-10 10:15:00,257 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 327523 for container-id container_e25_1583
> 485217113_0347_01_42: 1.9 GB of 2 GB physical memory used; 39.5 GB of 4.2 
> GB virtual memory used
> 2020-03-10 10:15:05,135 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Memory usage of ProcessTree 327523 for container-id container_e25_1583
> 485217113_0347_01_42: 3.6 GB of 2 GB physical memory used; 41.1 GB of 4.2 
> GB virtual memory used
> 2020-03-10 10:15:05,136 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Process tree for container: container_e25_1583485217113_0347_01_42
>  has processes older than 1 iteration running over the configured limit. 
> Limit=2147483648, current usage = 3915513856
> 2020-03-10 10:15:05,136 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  Container [pid=327523,containerID=container_e25_1583485217113_0347_01_
> 42] is running beyond physical memory limits. Current usage: 3.6 GB of 2 
> GB physical memory used; 41.1 GB of 4.2 GB virtual memory used. Killing 
> container.
> Dump of the process-tree for container_e25_1583485217113_0347_01_42 :
> |- 327535 327523 327523 327523 (java) 1611 111 4044427264 172306 
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre/bin/java 
> -server -Xmx1024m -Djava.io.tmpdir=/data/s
> cratch/yarn/usercache/u689299/appcache/application_1583485217113_0347/container_e25_1583485217113_0347_01_42/tmp
>  -Dspark.ssl.trustStore=/opt/mapr/conf/ssl_truststore -Dspark.authenticat
> e.enableSaslEncryption=true -Dspark.driver.port=40653 
> -Dspark.network.timeout=7200 -Dspark.ssl.keyStore=/opt/mapr/conf/ssl_keystore 
> -Dspark.network.sasl.serverAlwaysEncrypt=true -Dspark.ssl
> .enabled=true -Dspark.ssl.protocol=TLSv1.2 -Dspark.ssl.fs.enabled=true 
> -Dspark.ssl.ui.enabled=false -Dspark.authenticate=true 
> -Dspark.yarn.app.container.log.dir=/opt/mapr/hadoop/hadoop-2.7.
> 0/logs/userlogs/application_1583485217113_0347/container_e25_1583485217113_0347_01_42
>  -XX:OnOutOfMemoryError=kill %p 
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://coarsegrainedschedu...@bd02slse0201.wellsfargo.com:40653 
> --executor-id 40 --hostname bd02slsc0519.wellsfargo.com --cores 1 --app-id 
> application_1583485217113_0347 --user-class-path
> file:/data/scratch/yarn/usercache/u689299/appcache/application_1583485217113_0347/container_e25_1583485217113_0347_01_42/__app__.jar
> {code}
>  
>  
> After that, there are lots of pyspark.daemon processes left over, e.g.:
>  /apps/anaconda3-5.3.0/bin/python -m pyspark.daemon



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-31151) Reorganize the migration guide of SQL

2020-03-14 Thread Takeshi Yamamuro (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takeshi Yamamuro resolved SPARK-31151.
--
Fix Version/s: 3.0.0
   Resolution: Fixed

Resolved by 
[https://github.com/apache/spark/pull/27909|https://github.com/apache/spark/pull/27909/files]

> Reorganize the migration guide of SQL 
> --
>
> Key: SPARK-31151
> URL: https://issues.apache.org/jira/browse/SPARK-31151
> Project: Spark
>  Issue Type: Documentation
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Assignee: Xiao Li
>Priority: Major
> Fix For: 3.0.0
>
>
> The migration guide of SQL is too long and messy. Thus, it is hard to read 
> for most end users. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31156) DataFrameStatFunctions API is not consistent with respect to Column type

2020-03-14 Thread Oleksii Kachaiev (Jira)
Oleksii Kachaiev created SPARK-31156:


 Summary: DataFrameStatFunctions API is not consistent with respect 
to Column type
 Key: SPARK-31156
 URL: https://issues.apache.org/jira/browse/SPARK-31156
 Project: Spark
  Issue Type: Improvement
  Components: Java API
Affects Versions: 2.4.4
Reporter: Oleksii Kachaiev


Some functions from the {{org.apache.spark.sql.DataFrameStatFunctions}} class 
accept {{org.apache.spark.sql.Column}} as an argument:
 * {{bloomFilter}}
 * {{countMinSketch}}

The rest of the functions, however, accept only {{String}} (or collections of 
{{String}}s, respectively):
 * {{approxQuantile}}
 * {{corr}}
 * {{cov}}
 * {{crosstab}}
 * {{freqItems}}
 * {{sampleBy}}
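
To illustrate the inconsistency, a hedged sketch (the local session, data, and column names are made-up examples, not from the report). In 2.4.4 the first two calls compile with a {{Column}}, while the remaining functions require plain column-name strings:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").appName("stat-api-demo").getOrCreate()
import spark.implicits._
val df = Seq((1, 2.0), (2, 4.0), (3, 6.0)).toDF("a", "b")

// Accept org.apache.spark.sql.Column:
val bf  = df.stat.bloomFilter(col("a"), 1000L, 0.03)
val cms = df.stat.countMinSketch(col("a"), depth = 10, width = 100, seed = 42)

// Accept only String column names; passing col("a") here does not compile:
val r = df.stat.corr("a", "b")
val q = df.stat.approxQuantile("b", Array(0.5), 0.01)
{code}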



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-30432) reduce degree recomputation in StronglyConnectedComponents

2020-03-14 Thread li xiaosen (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

li xiaosen updated SPARK-30432:
---
Affects Version/s: 2.4.5

> reduce degree recomputation in StronglyConnectedComponents
> --
>
> Key: SPARK-30432
> URL: https://issues.apache.org/jira/browse/SPARK-30432
> Project: Spark
>  Issue Type: Improvement
>  Components: GraphX
>Affects Versions: 2.4.5, 3.0.0
>Reporter: li xiaosen
>Priority: Major
>
>  
> The computation happens every time in the do-while loop the first time the 
> outer while loop executes, and afterwards seemingly just once per do-while 
> loop, but skipping it does remove a lot of recomputation: every time the 
> algorithm jumps out of the do-while loop there are no vertices left that have 
> only out-degree or only in-degree, so there is no need to recompute the degrees 
> to tag those vertices true.
> I have prepared a small code proposal: once the Pregel executions are done, the 
> degrees do not need to be recomputed.
>  
> For example, on the Email-EuAll data 
> set ([http://snap.stanford.edu/data/email-EuAll.html]) the do-while loop 
> executes 10 times and the reduce logic takes effect only 2 times, so reducing 
> the degree computation would be helpful when computing 
> StronglyConnectedComponents.
>  
> I created a branch in my fork: 
> [https://github.com/xs-li/spark/blob/master/graphx/src/main/scala/org/apache/spark/graphx/lib/StronglyConnectedComponents.scala]
>  
> I hope you can consider this small code proposal.
> Thank you very much,
> Best regards,
> xs-li
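
For context, a minimal GraphX usage sketch (the local file path and iteration count are assumptions, not from the ticket) showing where the discussed degree computation is exercised: {{stronglyConnectedComponents}} drives the do-while trimming loop described above before each Pregel phase.

{code:scala}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("scc-demo").getOrCreate()
// Email-EuAll edge list from http://snap.stanford.edu/data/email-EuAll.html,
// downloaded to an assumed local path.
val graph = GraphLoader.edgeListFile(spark.sparkContext, "/tmp/email-EuAll.txt")
val scc = graph.stronglyConnectedComponents(numIter = 10)
// Each vertex is tagged with the lowest vertex id in its strongly connected component.
scc.vertices.take(5).foreach(println)
{code}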



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31155) Enable pydocstyle tests

2020-03-14 Thread Nicholas Chammas (Jira)
Nicholas Chammas created SPARK-31155:


 Summary: Enable pydocstyle tests
 Key: SPARK-31155
 URL: https://issues.apache.org/jira/browse/SPARK-31155
 Project: Spark
  Issue Type: Bug
  Components: Build, Documentation
Affects Versions: 3.0.0
Reporter: Nicholas Chammas


The pydocstyle tests have not been running on either Jenkins or GitHub.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28556) Error should also be sent to QueryExecutionListener.onFailure

2020-03-14 Thread Shixiong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059259#comment-17059259
 ] 

Shixiong Zhu commented on SPARK-28556:
--

This change has been reverted. See SPARK-31144 for the new fix.

> Error should also be sent to QueryExecutionListener.onFailure
> -
>
> Key: SPARK-28556
> URL: https://issues.apache.org/jira/browse/SPARK-28556
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Major
> Fix For: 3.0.0
>
>
> Right now Error is not sent to QueryExecutionListener.onFailure. If there is 
> any Error when running a query, QueryExecutionListener.onFailure cannot be 
> triggered.
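
For reference, a hedged sketch (2.4.x API; the local session, trivial query, and print statements are illustrative assumptions) of registering a {{QueryExecutionListener}}. Because {{onFailure}} is typed to {{java.lang.Exception}}, a {{java.lang.Error}} thrown while running the query never reaches it, which is the gap described above:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

val spark = SparkSession.builder().master("local[*]").appName("listener-demo").getOrCreate()

spark.listenerManager.register(new QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
    println(s"$funcName succeeded in ${durationNs / 1e6} ms")
  // In 2.4.x this callback takes an Exception, so Errors are never delivered here.
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
    println(s"$funcName failed: ${exception.getMessage}")
})

spark.range(10).count()  // fires onSuccess for the "count" action
{code}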



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-28556) Error should also be sent to QueryExecutionListener.onFailure

2020-03-14 Thread Shixiong Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-28556:
-
Docs Text:   (was: In Spark 3.0, the type of "error" parameter in the 
"org.apache.spark.sql.util.QueryExecutionListener.onFailure" method is changed 
to "java.lang.Throwable" from "java.lang.Exception" to accept more types of 
failures such as "java.lang.Error" and its subclasses.)

> Error should also be sent to QueryExecutionListener.onFailure
> -
>
> Key: SPARK-28556
> URL: https://issues.apache.org/jira/browse/SPARK-28556
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Major
> Fix For: 3.0.0
>
>
> Right now Error is not sent to QueryExecutionListener.onFailure. If there is 
> any Error when running a query, QueryExecutionListener.onFailure cannot be 
> triggered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-28556) Error should also be sent to QueryExecutionListener.onFailure

2020-03-14 Thread Shixiong Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shixiong Zhu updated SPARK-28556:
-
Labels:   (was: release-notes)

> Error should also be sent to QueryExecutionListener.onFailure
> -
>
> Key: SPARK-28556
> URL: https://issues.apache.org/jira/browse/SPARK-28556
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: Shixiong Zhu
>Assignee: Shixiong Zhu
>Priority: Major
> Fix For: 3.0.0
>
>
> Right now Error is not sent to QueryExecutionListener.onFailure. If there is 
> any Error when running a query, QueryExecutionListener.onFailure cannot be 
> triggered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31154) Expose basic write metrics for InsertIntoDataSourceCommand

2020-03-14 Thread Lantao Jin (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated SPARK-31154:
---
Description: 
Spark provides the `InsertableRelation` interface and `InsertIntoDataSourceCommand` 
to delegate insert processing to a data source. Unlike `DataWritingCommand`, the 
metrics in `InsertIntoDataSourceCommand` are empty and never get updated, so we 
cannot obtain "number of written files" or "number of output rows" from its metrics.

For example, if a table is a Spark Parquet table, we can get the write metrics by:
{code}
val df = sql("INSERT INTO TABLE test_table SELECT 1, 'a'")
val numFiles = df.queryExecution.sparkPlan.metrics("numFiles").value
{code}
But if it is a Delta table, we cannot.

  was:
Spark provides interface `InsertableRelation` and the 
`InsertIntoDataSourceCommand` to delegate the inserting processing to a data 
source. Unlike `DataWritingCommand`, the metrics in InsertIntoDataSourceCommand 
is empty and has no chance to update. So we cannot get "number of written 
files" or "number of output rows" from its metrics.

For example, if a table is a Spark parquet table. We can get the writing 
metrics by:
{code}
val df = sql("INSERT INTO TABLE test_table SELECT 1, 'a'")
df.executionP
{code}
But if it is a Delta table, we cannot.


> Expose basic write metrics for InsertIntoDataSourceCommand
> --
>
> Key: SPARK-31154
> URL: https://issues.apache.org/jira/browse/SPARK-31154
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Lantao Jin
>Priority: Major
>
> Spark provides the `InsertableRelation` interface and `InsertIntoDataSourceCommand` 
> to delegate insert processing to a data source. Unlike `DataWritingCommand`, the 
> metrics in `InsertIntoDataSourceCommand` are empty and never get updated, so we 
> cannot obtain "number of written files" or "number of output rows" from its 
> metrics.
> For example, if a table is a Spark Parquet table, we can get the write metrics by:
> {code}
> val df = sql("INSERT INTO TABLE test_table SELECT 1, 'a'")
> val numFiles = df.queryExecution.sparkPlan.metrics("numFiles").value
> {code}
> But if it is a Delta table, we cannot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-31154) Expose basic write metrics for InsertIntoDataSourceCommand

2020-03-14 Thread Lantao Jin (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated SPARK-31154:
---
Description: 
Spark provides the `InsertableRelation` interface and `InsertIntoDataSourceCommand` 
to delegate insert processing to a data source. Unlike `DataWritingCommand`, the 
metrics in `InsertIntoDataSourceCommand` are empty and never get updated, so we 
cannot obtain "number of written files" or "number of output rows" from its metrics.

For example, if a table is a Spark Parquet table, we can get the write metrics by:
{code}
val df = sql("INSERT INTO TABLE test_table SELECT 1, 'a'")
df.executionP
{code}
But if it is a Delta table, we cannot.

  was:Spark provides interface `InsertableRelation` and the 
`InsertIntoDataSourceCommand` to delegate the inserting processing to a data 
source. Unlike `DataWritingCommand`, the metrics in InsertIntoDataSourceCommand 
is empty and has no chance to update. So we cannot get "number of written 
files" or "number of output rows" from its metrics.


> Expose basic write metrics for InsertIntoDataSourceCommand
> --
>
> Key: SPARK-31154
> URL: https://issues.apache.org/jira/browse/SPARK-31154
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Lantao Jin
>Priority: Major
>
> Spark provides the `InsertableRelation` interface and `InsertIntoDataSourceCommand` 
> to delegate insert processing to a data source. Unlike `DataWritingCommand`, the 
> metrics in `InsertIntoDataSourceCommand` are empty and never get updated, so we 
> cannot obtain "number of written files" or "number of output rows" from its 
> metrics.
> For example, if a table is a Spark Parquet table, we can get the write metrics by:
> {code}
> val df = sql("INSERT INTO TABLE test_table SELECT 1, 'a'")
> df.executionP
> {code}
> But if it is a Delta table, we cannot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31154) Expose basic write metrics for InsertIntoDataSourceCommand

2020-03-14 Thread Lantao Jin (Jira)
Lantao Jin created SPARK-31154:
--

 Summary: Expose basic write metrics for InsertIntoDataSourceCommand
 Key: SPARK-31154
 URL: https://issues.apache.org/jira/browse/SPARK-31154
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.0.0, 3.1.0
Reporter: Lantao Jin


Spark provides the `InsertableRelation` interface and `InsertIntoDataSourceCommand` 
to delegate insert processing to a data source. Unlike `DataWritingCommand`, the 
metrics in `InsertIntoDataSourceCommand` are empty and never get updated, so we 
cannot obtain "number of written files" or "number of output rows" from its metrics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-31153) Cleanup several failures in lint-python

2020-03-14 Thread Nicholas Chammas (Jira)
Nicholas Chammas created SPARK-31153:


 Summary: Cleanup several failures in lint-python
 Key: SPARK-31153
 URL: https://issues.apache.org/jira/browse/SPARK-31153
 Project: Spark
  Issue Type: Bug
  Components: Build, PySpark
Affects Versions: 3.0.0
Reporter: Nicholas Chammas


I don't understand how this script runs fine on the build server. Perhaps we've 
just been getting lucky?

I will detail the issues in the PR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org