[jira] [Created] (SPARK-34009) Activate profile 'aarch64' based on OS

2021-01-05 Thread huangtianhua (Jira)
huangtianhua created SPARK-34009:


 Summary: Activate profile 'aarch64' based on OS
 Key: SPARK-34009
 URL: https://issues.apache.org/jira/browse/SPARK-34009
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.0.2
Reporter: huangtianhua


Currently we activate the 'aarch64' profile by passing '-Paarch64' when building spark on 
aarch64: https://github.com/apache/spark/blob/master/pom.xml#L3369-L3374

Maven can also activate a profile automatically based on OS settings, for 
example:

  aarch64
  
org.openlabtesting.leveldbjni
  
  

  linux
  aarch64

  





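For illustration, the OS-based selection that such a profile performs can be mimicked in a few lines. This is a hypothetical sketch (the function and group IDs below are illustrative, not part of Maven or the Spark build): on linux/aarch64 the openlabtesting rebuild of leveldbjni would be chosen, otherwise the default fusesource artifact.

```python
def leveldbjni_group(os_name: str, arch: str) -> str:
    """Mirror a Maven <activation><os> rule: on linux/aarch64 pick the
    openlabtesting rebuild of leveldbjni, otherwise the default group."""
    if os_name == "linux" and arch == "aarch64":
        return "org.openlabtesting.leveldbjni"
    return "org.fusesource.leveldbjni"

# The profile would activate only for the linux/aarch64 combination:
print(leveldbjni_group("linux", "aarch64"))  # org.openlabtesting.leveldbjni
print(leveldbjni_group("linux", "x86_64"))   # org.fusesource.leveldbjni
```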
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-33273) Fix Flaky Test: ThriftServerQueryTestSuite. subquery_scalar_subquery_scalar_subquery_select_sql

2020-12-13 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-33273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17248707#comment-17248707
 ] 

huangtianhua commented on SPARK-33273:
--

Test scalar-subquery-select.sql also fails on the arm64 job; see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/513/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerQueryTestSuite/subquery_scalar_subquery_scalar_subquery_select_sql/
and 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/516/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerQueryTestSuite/subquery_scalar_subquery_scalar_subquery_select_sql/

> Fix Flaky Test: ThriftServerQueryTestSuite. 
> subquery_scalar_subquery_scalar_subquery_select_sql
> ---
>
> Key: SPARK-33273
> URL: https://issues.apache.org/jira/browse/SPARK-33273
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 3.1.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>  Labels: correctness
> Attachments: failures.png
>
>
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/130369/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerQueryTestSuite/subquery_scalar_subquery_scalar_subquery_select_sql/
> {code}
> [info] - subquery/scalar-subquery/scalar-subquery-select.sql *** FAILED *** 
> (3 seconds, 877 milliseconds)
> [info]   Expected "[1]0   2017-05-04 01:01:0...", but got "[]0
> 2017-05-04 01:01:0..." Result did not match for query #3
> [info]   SELECT (SELECT min(t3d) FROM t3) min_t3d,
> [info]  (SELECT max(t2h) FROM t2) max_t2h
> [info]   FROM   t1
> [info]   WHERE  t1a = 'val1c' (ThriftServerQueryTestSuite.scala:197)
> {code}
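For context, each scalar subquery in the failing query is expected to return exactly one value per output row. A minimal standalone reproduction of the query shape with sqlite3 (illustrative toy data only, not the Spark test fixtures):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t1 (t1a TEXT);
CREATE TABLE t2 (t2h TEXT);
CREATE TABLE t3 (t3d INTEGER);
INSERT INTO t1 VALUES ('val1c');
INSERT INTO t2 VALUES ('2017-05-04 01:01:00');
INSERT INTO t3 VALUES (10), (20);
""")
# Each scalar subquery yields a single value attached to every t1 row.
rows = cur.execute("""
SELECT (SELECT min(t3d) FROM t3) AS min_t3d,
       (SELECT max(t2h) FROM t2) AS max_t2h
FROM   t1
WHERE  t1a = 'val1c'
""").fetchall()
print(rows)  # [(10, '2017-05-04 01:01:00')]
```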






[jira] [Commented] (SPARK-32691) Update commons-crypto to v1.1.0

2020-11-11 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230325#comment-17230325
 ] 

huangtianhua commented on SPARK-32691:
--

The ARM Jenkins CI still failed after this was merged, because we had not installed 
the libssl-dev package. I think it is OK now; let's wait for today's Jenkins status.

> Update commons-crypto to v1.1.0
> ---
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 2.4.7, 3.0.0, 3.0.1, 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Assignee: huangtianhua
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: Screen Shot 2020-09-28 at 8.49.04 AM.png, failure.log, 
> success.log
>
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-11-05 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17227196#comment-17227196
 ] 

huangtianhua commented on SPARK-32691:
--

We have found the problem: replicating to a remote executor takes longer than the 
default 120-second timeout, so replication is retried on another executor even 
though the first replication actually completes, leaving 3 replicas in total. We 
traced the hang to CryptoRandomFactory.getCryptoRandom(properties): the 
commons-crypto v1.0.0 jar doesn't support aarch64. After switching to v1.1.0 the 
tests pass and run quickly. 
So I plan to propose a PR to upgrade to commons-crypto v1.1.0, which supports 
aarch64: http://commons.apache.org/proper/commons-crypto/changes-report.html  
https://issues.apache.org/jira/browse/CRYPTO-139
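The failure mode described above (a slow replica plus a fixed timeout yielding one replica too many) can be sketched with a toy model. Everything below is a hypothetical illustration of the race, not Spark's actual replication code:

```python
def replicate(target_peers, replica_times, timeout):
    """Toy model: ask peers to store a replica one at a time.
    A peer whose replication exceeds `timeout` is treated as failed and the
    next peer is tried -- but the slow peer still finishes and keeps its
    copy, so the block ends up over-replicated (the bug on slow hardware)."""
    replicas = 1  # the original block
    acked = 0
    for peer in target_peers:
        took = replica_times[peer]
        replicas += 1          # the peer stores the block either way
        if took <= timeout:
            acked += 1
            if acked == 1:     # desired: one extra replica (2 total)
                break
        # on timeout: caller assumes failure and tries the next peer
    return replicas

# Fast peer: one successful ack, replication stops -> 2 replicas.
# Slow first peer (e.g. a commons-crypto stall on aarch64): timeout fires,
# a second peer is tried, but the slow peer completed too -> 3 replicas.
fast = replicate(["p1", "p2"], {"p1": 5, "p2": 5}, timeout=120)
slow = replicate(["p1", "p2"], {"p1": 300, "p2": 5}, timeout=120)
print(fast, slow)  # 2 3
```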

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Assignee: zhengruifeng
>Priority: Major
> Attachments: Screen Shot 2020-09-28 at 8.49.04 AM.png, failure.log, 
> success.log
>
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-15 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214641#comment-17214641
 ] 

huangtianhua commented on SPARK-32691:
--

So yes, it's not related to the SPARK-32517 commit.







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-15 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214640#comment-17214640
 ] 

huangtianhua commented on SPARK-32691:
--

I ran a test reverting the SPARK-32517 commit while only changing clusterUrl from 
"local-cluster[2,1,1024]" to "local-cluster[3,1,1024]", and the tests still failed.







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-15 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214550#comment-17214550
 ] 

huangtianhua commented on SPARK-32691:
--

[~podongfeng] Thanks, it appears twice in success.log







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-15 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214492#comment-17214492
 ] 

huangtianhua commented on SPARK-32691:
--


[~dongjoon] And I found that some of the failed tests are not the 'with replication 
as stream' ones, for example: 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/435/testReport/junit/org.apache.spark/DistributedSuite/caching_on_disk__replicated_2__encryption___on_/







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-15 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214490#comment-17214490
 ] 

huangtianhua commented on SPARK-32691:
--

[~dongjoon] Sorry, but I have tested that case: after reverting the SPARK-32517 
commit, the tests succeed. 







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-10-14 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214368#comment-17214368
 ] 

huangtianhua commented on SPARK-32691:
--

[~podongfeng] It seems the tests started failing after 
https://issues.apache.org/jira/browse/SPARK-32517 was merged. Could you please help 
check this? Thanks very much.







[jira] [Resolved] (SPARK-29422) test failed: org.apache.spark.deploy.history.HistoryServerSuite ajax rendered relative links are prefixed with uiRoot

2020-09-03 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua resolved SPARK-29422.
--
Resolution: Not A Problem

> test failed: org.apache.spark.deploy.history.HistoryServerSuite ajax rendered 
> relative links are prefixed with uiRoot
> -
>
> Key: SPARK-29422
> URL: https://issues.apache.org/jira/browse/SPARK-29422
> Project: Spark
>  Issue Type: Test
>  Components: Deploy
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Test org.apache.spark.deploy.history.HistoryServerSuite 'ajax rendered 
> relative links are prefixed with uiRoot' fails intermittently on our arm instance:
> ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) 
> *** FAILED ***
> 2 was not greater than 4 (HistoryServerSuite.scala:388)
>  
> It does not reproduce every time on the arm instance. Filing this issue to see 
> whether it also happens on the amplab jenkins.
>  






[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-09-02 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17189785#comment-17189785
 ] 

huangtianhua commented on SPARK-32691:
--

[~dongjoon], ok, thanks. And if anyone wants to reproduce the failure on ARM, I 
can provide an arm instance:)







[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2020-09-01 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188217#comment-17188217
 ] 

huangtianhua edited comment on SPARK-29106 at 9/1/20, 7:28 AM:
---

[~shaneknapp] and [~srowen], sorry to disturb you. There is an issue on the spark 
arm64 jenkins: the test 'org.apache.spark.DistributedSuite' has been failing for 
several days, and I don't know the cause and can't fix it, see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm. I have 
filed SPARK-32691 for it; could you take a look? I can provide an arm64 instance 
if needed, thanks very much. 

ps: the issue seems to have happened after commit 
b421bf0196897fa66d6d15ba6461a24b23ac6dd3


was (Author: huangtianhua):
[~shaneknapp] and [~srowen], sorry to disturb you. There is an issue for spark 
arm64 jenkins, the test 'org.apache.spark.DistributedSuite' failed for several 
days, and I don't know the reason and can't fix it, see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm, and I 
have filed a jira for SPARK-32691, could you have a look for it, I can provide 
arm64 instance if needed, thanks very much.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Assignee: Shane Knapp
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, 
> SparkR-and-pyspark36-testing.txt, arm-python36.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> So far we have set up two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), the 
> other is based on a branch we cut on 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
>  We only need to care about the first one when integrating the arm tests with 
> amplab jenkins.
> As for the k8s test on arm, we have already tried it, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We also plan to test other stable branches, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is leveldbjni 
> [https://github.com/fusesource/leveldbjni]: 
> spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't simply add a 'property'/'profile' to the spark pom.xml to choose 
> the correct jar on arm or x86, because spark depends on hadoop packages such as 
> hadoop-hdfs that also depend on leveldbjni-all-1.8, unless hadoop releases a new 
> arm-supporting leveldbjni jar. For now we download leveldbjni-all-1.8 from 
> openlabtesting and 'mvn install' it when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2020-09-01 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188217#comment-17188217
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp] and [~srowen], sorry to disturb you. There is an issue on the spark 
arm64 jenkins: the test 'org.apache.spark.DistributedSuite' has been failing for 
several days, and I don't know the cause and can't fix it, see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm. I have 
filed SPARK-32691 for it; could you take a look? I can provide an arm64 instance 
if needed, thanks very much.







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2020-09-01 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188200#comment-17188200
 ] 

huangtianhua commented on SPARK-29106:
--

[~blakegw] Sorry for the late reply. Yes, the tgz packages downloadable from 
spark.apache.org are built with leveldbjni without arm64 support; for example, 
spark-3.0.0-bin-hadoop2.7.tgz is missing an arm64-supporting leveldbjni jar. 







[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-31 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188113#comment-17188113
 ] 

huangtianhua commented on SPARK-32691:
--

> testOnly *DistributedSuite -- -z "caching in memory and disk, replicated 
> (encryption = on) (with replication as stream)"
I ran only the case "caching in memory and disk, replicated (encryption = on) 
(with replication as stream)"; it does not fail every time.
I am sorry I can't fix this issue myself. The arm jenkins has been failing for 
a few days, so I have uploaded success.log and failure.log as attachments. If 
anybody can help analyze them, I can also provide the arm64 instance if 
needed. Thanks all!

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Priority: Major
> Attachments: failure.log, success.log
>
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Updated] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-31 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-32691:
-
Attachment: failure.log

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Priority: Major
> Attachments: failure.log, success.log
>
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Updated] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-31 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-32691:
-
Attachment: success.log

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Priority: Major
> Attachments: failure.log, success.log
>
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-31 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187478#comment-17187478
 ] 

huangtianhua commented on SPARK-32691:
--

[~dongjoon] It seems the failures are not limited to the '(with replication as 
stream)' cases, see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/392/testReport/junit/org.apache.spark/DistributedSuite/caching_on_disk__replicated_2__encryption___on_/

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core, Tests
>Affects Versions: 3.1.0
> Environment: ARM64
>Reporter: huangtianhua
>Priority: Major
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Commented] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-24 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183189#comment-17183189
 ] 

huangtianhua commented on SPARK-32691:
--

[~dongjoon], I ran the tests several times locally; different tests fail each 
time. 

> Test org.apache.spark.DistributedSuite failed on arm64 jenkins
> --
>
> Key: SPARK-32691
> URL: https://issues.apache.org/jira/browse/SPARK-32691
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.1.0
>Reporter: huangtianhua
>Priority: Major
>
> Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 
> - caching in memory and disk, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory and disk, serialized, replicated (encryption = on) 
> (with replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> - caching in memory, serialized, replicated (encryption = on) (with 
> replication as stream) *** FAILED ***
>   3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
> ...
> 






[jira] [Created] (SPARK-32691) Test org.apache.spark.DistributedSuite failed on arm64 jenkins

2020-08-24 Thread huangtianhua (Jira)
huangtianhua created SPARK-32691:


 Summary: Test org.apache.spark.DistributedSuite failed on arm64 
jenkins
 Key: SPARK-32691
 URL: https://issues.apache.org/jira/browse/SPARK-32691
 Project: Spark
  Issue Type: Test
  Components: Tests
Affects Versions: 3.1.0
Reporter: huangtianhua


Tests of org.apache.spark.DistributedSuite are failed on arm64 jenkins: 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/ 

- caching in memory and disk, replicated (encryption = on) (with 
replication as stream) *** FAILED ***
  3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
- caching in memory and disk, serialized, replicated (encryption = on) 
(with replication as stream) *** FAILED ***
  3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
- caching in memory, serialized, replicated (encryption = on) (with 
replication as stream) *** FAILED ***
  3 did not equal 2; got 3 replicas instead of 2 (DistributedSuite.scala:191)
...







[jira] [Commented] (SPARK-32517) Add StorageLevel.DISK_ONLY_3

2020-08-23 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17182923#comment-17182923
 ] 

huangtianhua commented on SPARK-32517:
--

Thanks for looking into this deeply. I will file a new jira later.

> Add StorageLevel.DISK_ONLY_3
> 
>
> Key: SPARK-32517
> URL: https://issues.apache.org/jira/browse/SPARK-32517
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.1.0
>
>
> This issue aims to add `StorageLevel.DISK_ONLY_3` as a built-in StorageLevel.






[jira] [Commented] (SPARK-32517) Add StorageLevel.DISK_ONLY_3

2020-08-20 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17181072#comment-17181072
 ] 

huangtianhua commented on SPARK-32517:
--

[~dongjoon], hi, the arm test of 'org.apache.spark.DistributedSuite' has been 
failing since this commit was merged, see details: 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/378/  I 
tested locally and found two tests failed (sometimes just one of them). Would 
you please have a look? Thanks very much.

> Add StorageLevel.DISK_ONLY_3
> 
>
> Key: SPARK-32517
> URL: https://issues.apache.org/jira/browse/SPARK-32517
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.1.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.1.0
>
>
> This issue aims to add `StorageLevel.DISK_ONLY_3` as a built-in StorageLevel.






[jira] [Resolved] (SPARK-32519) test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64

2020-08-05 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-32519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua resolved SPARK-32519.
--
Resolution: Fixed

> test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for 
> aarch64
> ---
>
> Key: SPARK-32519
> URL: https://issues.apache.org/jira/browse/SPARK-32519
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> The aarch64 maven job failed after the commit 
> https://github.com/apache/spark/commit/813532d10310027fee9e12680792cee2e1c2b7c7
> was merged, see the log 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/353/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/
> I tested on my aarch64 instance; if I revert the commit 
> 813532d10310027fee9e12680792cee2e1c2b7c7, the test is ok. 






[jira] [Commented] (SPARK-32519) test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64

2020-08-05 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17171860#comment-17171860
 ] 

huangtianhua commented on SPARK-32519:
--

It works on the aarch64 platform, thanks all.

> test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for 
> aarch64
> ---
>
> Key: SPARK-32519
> URL: https://issues.apache.org/jira/browse/SPARK-32519
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> The aarch64 maven job failed after the commit 
> https://github.com/apache/spark/commit/813532d10310027fee9e12680792cee2e1c2b7c7
> was merged, see the log 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/353/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/
> I tested on my aarch64 instance; if I revert the commit 
> 813532d10310027fee9e12680792cee2e1c2b7c7, the test is ok. 






[jira] [Commented] (SPARK-32519) test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64

2020-08-04 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17170581#comment-17170581
 ] 

huangtianhua commented on SPARK-32519:
--

Thanks all. I will test this on aarch64 platform.

> test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for 
> aarch64
> ---
>
> Key: SPARK-32519
> URL: https://issues.apache.org/jira/browse/SPARK-32519
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> The aarch64 maven job failed after the commit 
> https://github.com/apache/spark/commit/813532d10310027fee9e12680792cee2e1c2b7c7
> was merged, see the log 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/353/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/
> I tested on my aarch64 instance; if I revert the commit 
> 813532d10310027fee9e12680792cee2e1c2b7c7, the test is ok. 






[jira] [Commented] (SPARK-32519) test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64

2020-08-03 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17169797#comment-17169797
 ] 

huangtianhua commented on SPARK-32519:
--

[~gsomogyi] I am not sure why this commit makes the test fail on the aarch64 
platform; could you have a look? Thanks very much.

> test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for 
> aarch64
> ---
>
> Key: SPARK-32519
> URL: https://issues.apache.org/jira/browse/SPARK-32519
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> The aarch64 maven job failed after the commit 
> https://github.com/apache/spark/commit/813532d10310027fee9e12680792cee2e1c2b7c7
> was merged, see the log 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/353/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/
> I tested on my aarch64 instance; if I revert the commit 
> 813532d10310027fee9e12680792cee2e1c2b7c7, the test is ok. 






[jira] [Created] (SPARK-32519) test of org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64

2020-08-03 Thread huangtianhua (Jira)
huangtianhua created SPARK-32519:


 Summary: test of 
org.apache.spark.sql.kafka010.KafkaSourceStressSuite failed for aarch64
 Key: SPARK-32519
 URL: https://issues.apache.org/jira/browse/SPARK-32519
 Project: Spark
  Issue Type: Test
  Components: Tests
Affects Versions: 3.0.0
Reporter: huangtianhua


The aarch64 maven job failed after the commit 
https://github.com/apache/spark/commit/813532d10310027fee9e12680792cee2e1c2b7c7 
was merged, see the log 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/353/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/

I tested on my aarch64 instance; if I revert the commit 
813532d10310027fee9e12680792cee2e1c2b7c7, the test is ok. 







[jira] [Commented] (SPARK-32088) test of pyspark.sql.functions.timestamp_seconds failed if non-american timezone setting

2020-06-30 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17149044#comment-17149044
 ] 

huangtianhua commented on SPARK-32088:
--

The test passes with the modification in 
https://github.com/apache/spark/pull/28959. [~maxgekk], thanks very much :)

> test of pyspark.sql.functions.timestamp_seconds failed if non-american 
> timezone setting
> ---
>
> Key: SPARK-32088
> URL: https://issues.apache.org/jira/browse/SPARK-32088
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.1.0
>Reporter: huangtianhua
>Assignee: philipse
>Priority: Major
> Fix For: 3.1.0
>
>
> The python test failed for aarch64 job, see 
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/405/console
>  since the commit 
> https://github.com/apache/spark/commit/f0e6d0ec13d9cdadf341d1b976623345bcdb1028#diff-c8de34467c555857b92875bf78bf9d49
>  merged:
> **
> File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/sql/functions.py",
>  line 1435, in pyspark.sql.functions.timestamp_seconds
> Failed example:
> time_df.select(timestamp_seconds(time_df.unix_time).alias('ts')).collect()
> Expected:
> [Row(ts=datetime.datetime(2008, 12, 25, 7, 30))]
> Got:
> [Row(ts=datetime.datetime(2008, 12, 25, 23, 30))]
> **
>1 of   3 in pyspark.sql.functions.timestamp_seconds
> ***Test Failed*** 1 failures.
> But this is not an arm64-related issue: I tested on an x86 instance with the 
> timezone set to UTC and the test failed too, so I think the expected datetime 
> assumes an America/** timezone, but it seems we do not set the timezone when 
> running these timezone-sensitive python tests.






[jira] [Created] (SPARK-32088) test of pyspark.sql.functions.timestamp_seconds failed if non-american timezone setting

2020-06-24 Thread huangtianhua (Jira)
huangtianhua created SPARK-32088:


 Summary: test of pyspark.sql.functions.timestamp_seconds failed if 
non-american timezone setting
 Key: SPARK-32088
 URL: https://issues.apache.org/jira/browse/SPARK-32088
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 3.1.0
Reporter: huangtianhua


The python test failed for the aarch64 job, see 
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/405/console
 since the commit 
https://github.com/apache/spark/commit/f0e6d0ec13d9cdadf341d1b976623345bcdb1028#diff-c8de34467c555857b92875bf78bf9d49
 was merged:
**
File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/sql/functions.py",
 line 1435, in pyspark.sql.functions.timestamp_seconds
Failed example:
time_df.select(timestamp_seconds(time_df.unix_time).alias('ts')).collect()
Expected:
[Row(ts=datetime.datetime(2008, 12, 25, 7, 30))]
Got:
[Row(ts=datetime.datetime(2008, 12, 25, 23, 30))]
**
   1 of   3 in pyspark.sql.functions.timestamp_seconds
***Test Failed*** 1 failures.

But this is not an arm64-related issue: I tested on an x86 instance with the 
timezone set to UTC and the test failed too, so I think the expected datetime 
assumes an America/** timezone, but it seems we do not set the timezone when 
running these timezone-sensitive python tests.
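
The timezone dependence described above can be reproduced outside Spark with plain Python. A minimal sketch, assuming a Unix host (`time.tzset` is unavailable on Windows); the epoch value 1230219000 is an illustrative number corresponding to 2008-12-25 15:30 UTC, which renders as the Expected/Got datetimes above in the two zones used below:

```python
import datetime
import os
import time

# 1230219000 seconds since the epoch is 2008-12-25 15:30 UTC.  A naive
# datetime.fromtimestamp() renders it in the process's local timezone,
# which is why a doctest comparing naive datetimes only holds in one zone.
UNIX_TIME = 1230219000

for tz in ("America/Los_Angeles", "Asia/Shanghai"):
    os.environ["TZ"] = tz
    time.tzset()  # Unix-only: re-read the TZ environment variable
    print(tz, datetime.datetime.fromtimestamp(UNIX_TIME))
# America/Los_Angeles 2008-12-25 07:30:00   (the Expected value above)
# Asia/Shanghai 2008-12-25 23:30:00         (the Got value, a UTC+8 zone)
```

Pinning TZ (e.g. to America/Los_Angeles) before running the doctests, or comparing timezone-aware datetimes, makes the result machine-independent.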






[jira] [Resolved] (SPARK-30340) Python tests failed on arm64/x86

2020-03-22 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua resolved SPARK-30340.
--
Resolution: Fixed

Seems the tests are passing now.

> Python tests failed on arm64/x86
> 
>
> Key: SPARK-30340
> URL: https://issues.apache.org/jira/browse/SPARK-30340
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Jenkins job spark-master-test-python-arm failed after the commit 
> c6ab7165dd11a0a7b8aea4c805409088e9a41a74:
> {code}
> File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2790, in __main__.FMClassifier
>  Failed example:
>  model.transform(test0).select("features", "probability").show(10, False)
>  Expected:
>  +--++
> |features|probability|
> +--++
> |[-1.0]|[0.97574736,2.425264676902229E-10]|
> |[0.5]|[0.47627851732981163,0.5237214826701884]|
> |[1.0]|[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]|[2.00573870645E-10,0.97994233]|
> +--++
>  Got:
>  +--++
> |features|probability|
> +--++
> |[-1.0]|[0.97574736,2.425264676902229E-10]|
> |[0.5]|[0.47627851732981163,0.5237214826701884]|
> |[1.0]|[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]|[2.00573870645E-10,0.97994233]|
> +--++
>  
>  **
>  File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2803, in __main__.FMClassifier
>  Failed example:
>  model.factors
>  Expected:
>  DenseMatrix(1, 2, [0.0028, 0.0048], 1)
>  Got:
>  DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
>  **
>  2 of 10 in __main__.FMClassifier
>  ***Test Failed*** 2 failures.
> {code}
>  
> The details see 
> [https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/91/console]
> And seems the tests failed on x86:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115668/console]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115665/console]






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2020-02-14 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17036785#comment-17036785
 ] 

huangtianhua commented on SPARK-29106:
--

The arm maven job has been failing again these days, and the reason is the same 
as last time: memory usage is high even when no job is running, and many 
processes are still alive after the maven jobs have run for some days, see 
[http://paste.openstack.org/show/789561]. I killed them; let's see the test 
results.

 

 

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Assignee: Shane Knapp
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, 
> SparkR-and-pyspark36-testing.txt, arm-python36.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrating the arm test with 
> amplab jenkins.
> About the k8s test on arm, we have tested it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan to test other stable branches too, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the info to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is about leveldbjni 
> [https://github.com/fusesource/leveldbjni|https://github.com/fusesource/leveldbjni/issues/80]:
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modify the spark pom.xml directly with something like a 
> 'property'/'profile' to choose the correct jar on the arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, and those 
> packages depend on leveldbjni-all-1.8 too, unless hadoop releases a new 
> arm-supporting leveldbjni jar. For now we download the leveldbjni-all-1.8 from 
> openlabtesting and 'mvn install' it when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Updated] (SPARK-30340) Python tests failed on arm64/x86

2019-12-23 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-30340:
-
Summary: Python tests failed on arm64/x86  (was: Python tests failed on 
arm64 )

> Python tests failed on arm64/x86
> 
>
> Key: SPARK-30340
> URL: https://issues.apache.org/jira/browse/SPARK-30340
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Jenkins job spark-master-test-python-arm failed after the commit 
> c6ab7165dd11a0a7b8aea4c805409088e9a41a74:
> File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2790, in __main__.FMClassifier
>  Failed example:
>  model.transform(test0).select("features", "probability").show(10, False)
>  Expected:
>  +--++
> |features|probability|
> +--++
> |[-1.0]|[0.97574736,2.425264676902229E-10]|
> |[0.5]|[0.47627851732981163,0.5237214826701884]|
> |[1.0]|[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]|[2.00573870645E-10,0.97994233]|
> +--++
>  Got:
>  +--++
> |features|probability|
> +--++
> |[-1.0]|[0.97574736,2.425264676902229E-10]|
> |[0.5]|[0.47627851732981163,0.5237214826701884]|
> |[1.0]|[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]|[2.00573870645E-10,0.97994233]|
> +--++
>  
>  **
>  File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2803, in __main__.FMClassifier
>  Failed example:
>  model.factors
>  Expected:
>  DenseMatrix(1, 2, [0.0028, 0.0048], 1)
>  Got:
>  DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
>  **
>  2 of 10 in __main__.FMClassifier
>  ***Test Failed*** 2 failures.
>  
> The details see 
> [https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/91/console]
> And seems the tests failed on x86:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115668/console]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115665/console]






[jira] [Updated] (SPARK-30340) Python tests failed on arm64

2019-12-23 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-30340:
-
Description: 
Jenkins job spark-master-test-python-arm failed after the commit 
c6ab7165dd11a0a7b8aea4c805409088e9a41a74:

File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2790, in __main__.FMClassifier
 Failed example:
 model.transform(test0).select("features", "probability").show(10, False)
 Expected:
 +--++
|features|probability|

+--++
|[-1.0]|[0.97574736,2.425264676902229E-10]|
|[0.5]|[0.47627851732981163,0.5237214826701884]|
|[1.0]|[5.491554426243495E-4,0.9994508445573757]|
|[2.0]|[2.00573870645E-10,0.97994233]|

+--++
 Got:
 +--++
|features|probability|

+--++
|[-1.0]|[0.97574736,2.425264676902229E-10]|
|[0.5]|[0.47627851732981163,0.5237214826701884]|
|[1.0]|[5.491554426243495E-4,0.9994508445573757]|
|[2.0]|[2.00573870645E-10,0.97994233]|

+--++
 
 **
 File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2803, in __main__.FMClassifier
 Failed example:
 model.factors
 Expected:
 DenseMatrix(1, 2, [0.0028, 0.0048], 1)
 Got:
 DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
 **
 2 of 10 in __main__.FMClassifier
 ***Test Failed*** 2 failures.

 

The details see 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/91/console]

And it seems the tests also failed on x86:

[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115668/console]

[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115665/console]
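The two failures above differ only in learned-coefficient values (including a sign flip in model.factors), which is typical of doctests that compare floating-point model output as exact strings. A minimal plain-Python illustration of the distinction a tolerance-based check makes (not PySpark code; only the numeric values are taken from the failure above):

```python
import math

# Values copied from the failing FMClassifier doctest above:
# expected model.factors vs. what the arm64 run produced.
expected = [0.0028, 0.0048]
actual = [-0.0122, 0.0106]

def all_close(xs, ys, rel_tol=1e-9, abs_tol=1e-2):
    """True when every pair of values agrees within the given tolerances."""
    return all(math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
               for x, y in zip(xs, ys))

# These factors genuinely differ (note the sign flip), so even a tolerant
# comparison rejects them -- the failure is real, not a formatting issue:
print(all_close(expected, actual))             # False
# Whereas tiny cross-platform float noise would pass such a check:
print(all_close([0.0028], [0.0028000000001]))  # True
```

This suggests the arm64 run converged to a different model rather than merely printing floats differently.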

  was:
Jenkins job spark-master-test-python-arm failed after the commit 
c6ab7165dd11a0a7b8aea4c805409088e9a41a74:

File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2790, in __main__.FMClassifier
 Failed example:
 model.transform(test0).select("features", "probability").show(10, False)
 Expected:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 Got:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 
 **
 File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2803, in __main__.FMClassifier
 Failed example:
 model.factors
 Expected:
 DenseMatrix(1, 2, [0.0028, 0.0048], 1)
 Got:
 DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
 **
 2 of 10 in __main__.FMClassifier
 ***Test Failed*** 2 failures.

 

For details see 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/91/console]


> Python tests failed on arm64 
> -
>
> Key: SPARK-30340
> URL: https://issues.apache.org/jira/browse/SPARK-30340
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Jenkins job spark-master-test-python-arm failed after the commit 
> c6ab7165dd11a0a7b8aea4c805409088e9a41a74:
> File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2790, in __main__.FMClassifier
>  Failed example:
>  model.transform(test0).select("features", "probability").show(10, False)
>  Expected:
> +--------+-----------------------------------------+
> |features|probability                              |
> +--------+-----------------------------------------+
> |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
> |[0.5]   |[0.47627851732981163,0.5237214826701884] |
> |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]   |[2.00573870645E-10,0.97994233]           |
> +--------+-----------------------------------------+
>  Got:
>  

[jira] [Updated] (SPARK-30340) Python tests failed on arm64

2019-12-23 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-30340:
-
Description: 
Jenkins job spark-master-test-python-arm failed after the commit 
c6ab7165dd11a0a7b8aea4c805409088e9a41a74:

File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2790, in __main__.FMClassifier
 Failed example:
 model.transform(test0).select("features", "probability").show(10, False)
 Expected:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 Got:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 
 **
 File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2803, in __main__.FMClassifier
 Failed example:
 model.factors
 Expected:
 DenseMatrix(1, 2, [0.0028, 0.0048], 1)
 Got:
 DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
 **
 2 of 10 in __main__.FMClassifier
 ***Test Failed*** 2 failures.

 

For details see 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/91/console]

  was:
Jenkins job spark-master-test-python-arm failed after the commit 
c6ab7165dd11a0a7b8aea4c805409088e9a41a74:

File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2790, in __main__.FMClassifier
Failed example:
 model.transform(test0).select("features", "probability").show(10, False)
Expected:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
Got:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 
**
File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2803, in __main__.FMClassifier
Failed example:
 model.factors
Expected:
 DenseMatrix(1, 2, [0.0028, 0.0048], 1)
Got:
 DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
**
 2 of 10 in __main__.FMClassifier
***Test Failed*** 2 failures.


> Python tests failed on arm64 
> -
>
> Key: SPARK-30340
> URL: https://issues.apache.org/jira/browse/SPARK-30340
> Project: Spark
>  Issue Type: Bug
>  Components: ML
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Jenkins job spark-master-test-python-arm failed after the commit 
> c6ab7165dd11a0a7b8aea4c805409088e9a41a74:
> File 
> "/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
>  line 2790, in __main__.FMClassifier
>  Failed example:
>  model.transform(test0).select("features", "probability").show(10, False)
>  Expected:
> +--------+-----------------------------------------+
> |features|probability                              |
> +--------+-----------------------------------------+
> |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
> |[0.5]   |[0.47627851732981163,0.5237214826701884] |
> |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
> |[2.0]   |[2.00573870645E-10,0.97994233]           |
> +--------+-----------------------------------------+
>  Got:
> +--------+-----------------------------------------+
> |features|probability                              |
> +--------+-----------------------------------------+
> |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
> |[0.5]   |[0.47627851732981163,0.5237214826701884] |
> |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
> 

[jira] [Created] (SPARK-30340) Python tests failed on arm64

2019-12-23 Thread huangtianhua (Jira)
huangtianhua created SPARK-30340:


 Summary: Python tests failed on arm64 
 Key: SPARK-30340
 URL: https://issues.apache.org/jira/browse/SPARK-30340
 Project: Spark
  Issue Type: Bug
  Components: ML
Affects Versions: 3.0.0
Reporter: huangtianhua


Jenkins job spark-master-test-python-arm failed after the commit 
c6ab7165dd11a0a7b8aea4c805409088e9a41a74:

File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2790, in __main__.FMClassifier
Failed example:
 model.transform(test0).select("features", "probability").show(10, False)
Expected:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
Got:
 +--------+-----------------------------------------+
 |features|probability                              |
 +--------+-----------------------------------------+
 |[-1.0]  |[0.97574736,2.425264676902229E-10]       |
 |[0.5]   |[0.47627851732981163,0.5237214826701884] |
 |[1.0]   |[5.491554426243495E-4,0.9994508445573757]|
 |[2.0]   |[2.00573870645E-10,0.97994233]           |
 +--------+-----------------------------------------+
 
**
File 
"/home/jenkins/workspace/spark-master-test-python-arm/python/pyspark/ml/classification.py",
 line 2803, in __main__.FMClassifier
Failed example:
 model.factors
Expected:
 DenseMatrix(1, 2, [0.0028, 0.0048], 1)
Got:
 DenseMatrix(1, 2, [-0.0122, 0.0106], 1)
**
 2 of 10 in __main__.FMClassifier
***Test Failed*** 2 failures.






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-12-12 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995280#comment-16995280
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], thanks, 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/77/] 
has finished; it took about 6 hours :). Please also update the instance info: 
the OS is Ubuntu 18.04.1 LTS.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, 
> SparkR-and-pyspark36-testing.txt, arm-python36.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> So far we have set up two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the amplab jenkins QA test), and 
> the other is based on a branch we cut on 09-09, see 
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>  and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  Only the first one matters when integrating the arm tests with amplab 
> jenkins.
> We have also tried the k8s test on arm, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We plan to test other stable branches too, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is the leveldbjni dependency 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting 
> release of leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we cannot simply modify the spark pom.xml with a 
> 'property'/'profile' to choose the correct jar on arm or x86, 
> because spark depends on some hadoop packages such as hadoop-hdfs, and those 
> also depend on leveldbjni-all-1.8, unless hadoop releases with a new 
> arm-supporting leveldbjni jar. For now we download the leveldbjni-all-1.8 from 
> openlabtesting and 'mvn install' it when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  
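For reference, the per-OS activation later proposed for this (see SPARK-34009 at the top of this archive) can be sketched as a Maven profile. This is a sketch only: the profile id and the openlabtesting groupId come from this thread, the property name `leveldbjni.group` is an assumption, and the rest follows Maven's standard `<activation><os>` mechanism rather than anything merged into Spark's pom.xml:

```xml
<!-- Sketch: activates automatically on Linux/aarch64 instead of requiring
     -Paarch64 on the command line. <name>/<arch> are matched against the
     os.name and os.arch system properties by Maven's OS profile activator. -->
<profile>
  <id>aarch64</id>
  <properties>
    <!-- Hypothetical property a dependency declaration could reference as
         ${leveldbjni.group} to swap in the arm64-capable artifact. -->
    <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
  </properties>
  <activation>
    <os>
      <name>linux</name>
      <arch>aarch64</arch>
    </os>
  </activation>
</profile>
```

With such a profile, `mvn test` on an aarch64 Linux host would pick up the override without any extra `-P` flag, while x86 builds keep the default `org.fusesource.leveldbjni` coordinates.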






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-12-03 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987456#comment-16987456
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp],:)

So it seems you're back :) Could you please add the new arm instance to 
amplab? It has high performance; the maven test takes about 5 hours. I sent 
the instance info to your email a few days ago. If you have any problem, 
please contact me, thank you very much! 







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-12-03 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987407#comment-16987407
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], ok, thanks. Sorry, I noticed the mvn command of the arm test is 
still "./build/mvn test -Paarch64 -Phadoop-2.7 -Phive1.2 -Pyarn -Phive 
-Phive-thriftserver -Pkinesis-asl -Pmesos"; it should be -Phive-1.2, right?







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-12-03 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16986743#comment-16986743
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], we no longer have to install leveldbjni-all manually; the PR 
has been merged: [https://github.com/apache/spark/pull/26636]. Please add the 
profile -Paarch64 when running maven commands for the arm testing jobs, thanks.







[jira] [Created] (SPARK-30057) Add a statement of platforms that Spark runs on

2019-11-26 Thread huangtianhua (Jira)
huangtianhua created SPARK-30057:


 Summary: Add a statement of platforms that Spark runs on
 Key: SPARK-30057
 URL: https://issues.apache.org/jira/browse/SPARK-30057
 Project: Spark
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 3.0.0
Reporter: huangtianhua


The arm jobs have been integrated into the amplab jenkins CI; they run daily 
and have been running stably for a few weeks. I will add a statement to the 
spark docs so users can easily find out which platforms spark runs on.






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-11-26 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982245#comment-16982245
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], and I have good news about the arm testing instance: we are 
going to donate an arm instance in the Singapore region. I will test it first, 
and when we are ready I will contact you to integrate it. 

So the arm instance in the beijing region mentioned above will be deleted 
(afaik, you haven't added it as a jenkins worker, right?) 







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-11-25 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982232#comment-16982232
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], I modified the script /tmp/hudson517681717587319554.sh manually 
on the arm instance. Let's wait for the result :)







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-11-25 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982078#comment-16982078
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], the arm jobs failed; I think we should add the right profile 
'-Phive-1.2' instead of '-Phive1.2' for the mvn test. 




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-11-21 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979938#comment-16979938
 ] 

huangtianhua commented on SPARK-27721:
--

Will add a profile to switch the leveldbjni dependency based on the platform; I 
have run tests on x86 and aarch64 platforms and it works.
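
A sketch of what such a profile could look like in the pom.xml, activated 
explicitly with `mvn -Paarch64 ...` (the `leveldbjni.group` property name is a 
hypothetical placeholder, not Spark's actual pom.xml):

```xml
<!-- Hypothetical sketch: default to org.fusesource.leveldbjni via a property,
     and let the aarch64 profile override it with the openlabtesting build. -->
<properties>
  <leveldbjni.group>org.fusesource.leveldbjni</leveldbjni.group>
</properties>
<profiles>
  <profile>
    <id>aarch64</id>
    <properties>
      <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
    </properties>
  </profile>
</profiles>
<!-- The dependency then references the property: -->
<dependency>
  <groupId>${leveldbjni.group}</groupId>
  <artifactId>leveldbjni-all</artifactId>
  <version>1.8</version>
</dependency>
```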

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on spark master and branch-2.4.
> Error log excerpts below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithSkip(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> 

[jira] [Updated] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-11-21 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-27721:
-
Issue Type: Improvement  (was: Question)

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>

[jira] [Reopened] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-11-21 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua reopened SPARK-27721:
--

Unfortunately, the repository [https://github.com/fusesource/leveldbjni] has 
been unmaintained since 2013. We tried to contact the committer to publish an 
aarch64 release but got no reply, so we built leveldbjni-all-1.8 on an aarch64 
platform locally and deployed it as 
org.openlabtesting.leveldbjni:leveldbjni-all-1.8. We use it in amplab jenkins 
for the arm jobs and the tests pass, see 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/] 

Let's add an 'aarch64' profile that uses 
org.openlabtesting.leveldbjni:leveldbjni-all-1.8 on the aarch64 platform, the 
same thing hadoop does: https://issues.apache.org/jira/browse/HADOOP-16614
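
Following the HADOOP-16614 approach, the profile could be activated 
automatically from the OS settings instead of requiring an explicit `-P` flag; 
a sketch (the `leveldbjni.group` property name is a hypothetical placeholder):

```xml
<!-- Hypothetical sketch: Maven activates this profile automatically when
     the build host reports os.name=linux and os.arch=aarch64. -->
<profile>
  <id>aarch64</id>
  <properties>
    <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
  </properties>
  <activation>
    <os>
      <name>linux</name>
      <arch>aarch64</arch>
    </os>
  </activation>
</profile>
```

With OS-based activation, x86 and aarch64 builds both run plain `mvn package` 
and pick up the right leveldbjni artifact without extra flags.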

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Question
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>

[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-11-14 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974143#comment-16974143
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], the vm is ready; I have built and tested in /home/jenkins/spark. 
Because the image of the old arm testing instance is too large, we can't create 
the new instance from that image, so we copied the contents of /home/jenkins/ 
into the new instance.

Because of the network performance, we cache the "hive-ivy" sources locally in 
/home/jenkins/hive-ivy-cache; please export the environment variable 
{color:#de350b}SPARK_VERSIONS_SUITE_IVY_PATH=/home/jenkins/hive-ivy-cache/{color}
 before running the maven tests.  

I will send the detailed info of the vm to your email later.

Please add it as a worker of amplab jenkins and try to build the two jobs as we 
did before; don't hesitate to contact us if you have any questions, thanks very 
much.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, arm-python36.txt
>
>






[jira] [Commented] (SPARK-29222) Flaky test: pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence

2019-10-29 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961766#comment-16961766
 ] 

huangtianhua commented on SPARK-29222:
--

[~hyukjin.kwon], the failure is related to the performance of the arm instance. 
We have donated an arm instance to AMPLab but haven't built the python job yet, 
and we plan to donate some higher-performance arm instances to AMPLab next 
month. If the issue happens again I will create a pr to fix it. Thanks a lot.

> Flaky test: 
> pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence
> ---
>
> Key: SPARK-29222
> URL: https://issues.apache.org/jira/browse/SPARK-29222
> Project: Spark
>  Issue Type: Test
>  Components: MLlib, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111237/testReport/]
> {code:java}
> Error Message
> 7 != 10
> StacktraceTraceback (most recent call last):
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergence
> self._eventually(condition, catch_assertions=True)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
> raise lastValue
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
> lastValue = condition()
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
> self.assertEqual(len(model_weights), len(batches))
> AssertionError: 7 != 10
>{code}
>  
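
The `_eventually` helper in the traceback polls a condition with retries until 
a timeout. A minimal self-contained sketch of that pattern (the name, 
signature, and defaults here are assumptions, not PySpark's actual 
implementation):

```python
import time


def eventually(condition, timeout=30.0, interval=0.1, catch_assertions=False):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    With catch_assertions=True, an AssertionError raised by `condition` is
    swallowed and the condition is retried; the last AssertionError is
    re-raised if the timeout expires (mirroring the traceback above, where
    the final retry's assertion failure is what the test reports).
    """
    deadline = time.monotonic() + timeout
    last_assertion = None
    while time.monotonic() < deadline:
        if catch_assertions:
            try:
                if condition():
                    return
            except AssertionError as exc:
                last_assertion = exc
        elif condition():
            return
        time.sleep(interval)
    if last_assertion is not None:
        raise last_assertion
    raise AssertionError("condition not met within %.1f seconds" % timeout)
```

One plausible mitigation for a flaky test built on such a helper is to raise 
the timeout or loosen the polled condition so slower hosts (like the arm 
instances discussed in this thread) still converge before the deadline.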






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-29 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961760#comment-16961760
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], hi, the arm job has been building for some days. Although there 
are some failures due to the poor performance of the arm instance or the 
network upgrade, the good news is that we got several successes; at least it 
means that spark works on the arm platform :) 

According to the discussion 'Deprecate Python < 3.6 in Spark 3.0', I think we 
can now run the python tests only for python3.6, which would be simpler for us. 
So how about adding a new arm job for python3.6 tests? 

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-24 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959355#comment-16959355
 ] 

huangtianhua commented on SPARK-29106:
--

Note: Hadoop now supports leveldbjni on the aarch64 platform, see 
https://issues.apache.org/jira/browse/HADOOP-16614

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-22 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957508#comment-16957508
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], there are two small suggestions:
 # we don't have to download and install leveldbjni-all-1.8 on our arm test 
instance; it is already installed there.
 # maybe we can try to use 'mvn clean package ' instead of 'mvn clean 
install '?

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-22 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956781#comment-16956781
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], thank you for the clarification, and thanks for your work (y)

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), 
> the other is based on a branch we cut on 09-09, see 
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>  and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
>  We only have to care about the first one when integrating the arm tests with 
> amplab jenkins.
> About the k8s test on arm, we have tested it, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We also plan to test other stable branches, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is the leveldbjni project 
> [https://github.com/fusesource/leveldbjni] (see also 
> [https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
> leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
> choose the correct jar on arm or x86, because spark depends on some hadoop 
> packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 
> too, unless hadoop releases a new leveldbjni jar with arm support. For now we 
> download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
> locally when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  
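The per-platform artifact choice described above can be sketched in a few lines. This is only an illustrative helper, not the actual OpenLab job code: the function name and the selection logic are assumptions based on the comment.

```python
import platform

def leveldbjni_coordinates():
    """Pick leveldbjni Maven coordinates for the current CPU architecture.

    The org.openlabtesting groupId is the community arm64 rebuild mentioned
    above; every other architecture falls back to the upstream artifact.
    """
    group = ("org.openlabtesting.leveldbjni"
             if platform.machine() == "aarch64"
             else "org.fusesource.leveldbjni")
    return f"{group}:leveldbjni-all:1.8"

print(leveldbjni_coordinates())
```

In the actual workaround the arm64 jar is 'mvn install'-ed into the local repository beforehand rather than selected at build time, because the transitive hadoop dependencies pin the fusesource coordinates.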






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-21 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955990#comment-16955990
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], I am happy to see that two builds succeeded :). I see the arm 
job is now triggered by 'SCM' changes, which is good. I am wondering about the 
periodic schedule. Thanks.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), 
> the other is based on a branch we cut on 09-09, see 
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>  and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
>  We only have to care about the first one when integrating the arm tests with 
> amplab jenkins.
> About the k8s test on arm, we have tested it, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We also plan to test other stable branches, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is the leveldbjni project 
> [https://github.com/fusesource/leveldbjni] (see also 
> [https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
> leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
> choose the correct jar on arm or x86, because spark depends on some hadoop 
> packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 
> too, unless hadoop releases a new leveldbjni jar with arm support. For now we 
> download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
> locally when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-18 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954320#comment-16954320
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], the build 
[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/2/] was 
aborted due to a timeout. It seems [~bzhaoopenstack] and I made a mistake: we 
ran the unit tests twice, because our leveldbjni jar was already installed 
locally (step 2 was expected to fail within one or two minutes, but it ran for 
more than 3 hours). We have modified the scripts; let's run again when you have 
time and see the result. Note: the tests will sometimes fail because of the VM 
performance. 

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), 
> the other is based on a branch we cut on 09-09, see 
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>  and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
>  We only have to care about the first one when integrating the arm tests with 
> amplab jenkins.
> About the k8s test on arm, we have tested it, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We also plan to test other stable branches, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is the leveldbjni project 
> [https://github.com/fusesource/leveldbjni] (see also 
> [https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
> leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
> choose the correct jar on arm or x86, because spark depends on some hadoop 
> packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 
> too, unless hadoop releases a new leveldbjni jar with arm support. For now we 
> download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
> locally when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954195#comment-16954195
 ] 

huangtianhua commented on SPARK-29106:
--

[~shaneknapp], thanks for working on this :)

Yes, I agree with [~bzhaoopenstack]; maybe we can provide a shell script that 
does the following:
 # update the spark repo to master (or is there another way you could do this?)
 # call mvn build and mvn test, like './build/mvn clean package -DskipTests 
-Phadoop-2.7 -Pyarn -Phive -Phive-thriftserver -Pkinesis-asl -Pmesos' and 
'./build/mvn test -Phadoop-2.7 -Pyarn -Phive -Phive-thriftserver -Pkinesis-asl 
-Pmesos'

One important thing: we installed our leveldbjni-all 1.8 locally, as I 
mentioned before; we released a new jar to support the aarch64 platform.

Now the demo ansible script has three steps to run the tests:
 # build spark
 # run the tests without installing our leveldbjni jar package; an error is 
raised like "[no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
java.library.path, no leveldbjni in java.library.path, 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 cannot open shared object file: No such file or directory (Possible cause: 
can't load AMD 64-bit .so on a AARCH64-bit platform)]" 
 # run the tests with our leveldbjni jar, which supports the aarch64 platform, 
and then the tests pass.

I think we don't have to run step 2 anymore on the community jenkins, right? 
We will remove that step from the script.
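For illustration, the two Maven invocations above could be wired together by a small driver like the following. The function names and the repository path are assumptions; only the `./build/mvn` commands and profile list are copied from the comment.

```python
import subprocess

# Maven profiles copied verbatim from the commands in the comment above.
PROFILES = ["-Phadoop-2.7", "-Pyarn", "-Phive",
            "-Phive-thriftserver", "-Pkinesis-asl", "-Pmesos"]

def maven_command(goals):
    """Build the ./build/mvn argument list for the given goals."""
    return ["./build/mvn", *goals, *PROFILES]

def run_arm_job(repo="/usr/local/src/spark"):
    """Build spark without tests, then run the full test suite.

    Assumes the arm64 leveldbjni-all-1.8 jar was already 'mvn install'-ed
    into the local repository, so the expected-failure step (step 2 above)
    is skipped entirely.
    """
    for goals in (["clean", "package", "-DskipTests"], ["test"]):
        subprocess.run(maven_command(goals), cwd=repo, check=True)
```

Keeping the profile list in one place makes it easy to keep the build and test invocations consistent, which is the mistake (running the tests twice) mentioned in an earlier comment.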

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
> based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), 
> the other is based on a branch we cut on 09-09, see 
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>  and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
>  We only have to care about the first one when integrating the arm tests with 
> amplab jenkins.
> About the k8s test on arm, we have tested it, see 
> [https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it 
> later. 
> We also plan to test other stable branches, and we can integrate them into 
> amplab when they are ready.
> We have offered an arm instance and sent the details to shane knapp; thanks 
> shane for adding the first arm job to amplab jenkins :) 
> The other important thing is the leveldbjni project 
> [https://github.com/fusesource/leveldbjni] (see also 
> [https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
> leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  which has no arm64 support. So we built an arm64-supporting release of 
> leveldbjni, see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
> choose the correct jar on arm or x86, because spark depends on some hadoop 
> packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 
> too, unless hadoop releases a new leveldbjni jar with arm support. For now we 
> download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
> locally when testing spark on arm.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29222) Flaky test: pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence

2019-10-17 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953515#comment-16953515
 ] 

huangtianhua commented on SPARK-29222:
--

[~hyukjin.kwon], we ran some tests, and it seems these failures depend on the 
performance of the instance: the tests may pass only once in several runs. But 
everything seems to be ok if we increase the default timeout to 120s and the 
batchDuration to 3s.
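As a minimal sketch of the retry loop this tuning targets: the name below mimics the `_eventually` helper in the pyspark test file shown in the stack trace, but the real signature there may differ, and the 120 s default is the tuned value from this comment.

```python
import time

def eventually(condition, timeout=120.0, interval=0.1):
    """Re-evaluate `condition` until it stops raising AssertionError,
    or until `timeout` seconds elapse (then re-raise the last error).

    Widening `timeout` (and the streaming batch interval) gives slow
    instances enough time for all batches to be processed.
    """
    deadline = time.time() + timeout
    last_error = None
    while time.time() < deadline:
        try:
            return condition()
        except AssertionError as err:
            last_error = err
            time.sleep(interval)
    raise last_error if last_error else AssertionError("condition never met")
```

With a condition like `assertEqual(len(model_weights), len(batches))`, a longer deadline simply gives the streaming job more chances to catch up before the assertion is treated as a real failure.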

> Flaky test: 
> pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence
> ---
>
> Key: SPARK-29222
> URL: https://issues.apache.org/jira/browse/SPARK-29222
> Project: Spark
>  Issue Type: Test
>  Components: MLlib, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111237/testReport/]
> {code:java}
> Error Message
> 7 != 10
> StacktraceTraceback (most recent call last):
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergence
> self._eventually(condition, catch_assertions=True)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
> raise lastValue
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
> lastValue = condition()
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
> self.assertEqual(len(model_weights), len(batches))
> AssertionError: 7 != 10
>{code}
>  






[jira] [Commented] (SPARK-29222) Flaky test: pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence

2019-10-11 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949869#comment-16949869
 ] 

huangtianhua commented on SPARK-29222:
--

[~hyukjin.kwon], we tested increasing the default timeout to 120s and the 
batchDuration to 3s; I will check whether we need to increase the time a bit more.

> Flaky test: 
> pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence
> ---
>
> Key: SPARK-29222
> URL: https://issues.apache.org/jira/browse/SPARK-29222
> Project: Spark
>  Issue Type: Test
>  Components: MLlib, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111237/testReport/]
> {code:java}
> Error Message
> 7 != 10
> StacktraceTraceback (most recent call last):
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergence
> self._eventually(condition, catch_assertions=True)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
> raise lastValue
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
> lastValue = condition()
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
> self.assertEqual(len(model_weights), len(batches))
> AssertionError: 7 != 10
>{code}
>  






[jira] [Updated] (SPARK-29422) test failed: org.apache.spark.deploy.history.HistoryServerSuite ajax rendered relative links are prefixed with uiRoot

2019-10-11 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29422:
-
Summary: test failed: org.apache.spark.deploy.history.HistoryServerSuite 
ajax rendered relative links are prefixed with uiRoot  (was: Flaky test: 
org.apache.spark.deploy.history.HistoryServerSuite ajax rendered relative links 
are prefixed with uiRoot)

> test failed: org.apache.spark.deploy.history.HistoryServerSuite ajax rendered 
> relative links are prefixed with uiRoot
> -
>
> Key: SPARK-29422
> URL: https://issues.apache.org/jira/browse/SPARK-29422
> Project: Spark
>  Issue Type: Test
>  Components: Deploy
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> The test org.apache.spark.deploy.history.HistoryServerSuite "ajax rendered 
> relative links are prefixed with uiRoot" sometimes fails on our arm instance:
> ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) 
> *** FAILED ***
> 2 was not greater than 4 (HistoryServerSuite.scala:388)
>  
> It is not reproduced every time on the arm instance. Creating this issue to 
> see whether it also happens on amplab jenkins.
>  






[jira] [Created] (SPARK-29422) Flaky test: org.apache.spark.deploy.history.HistoryServerSuite ajax rendered relative links are prefixed with uiRoot

2019-10-09 Thread huangtianhua (Jira)
huangtianhua created SPARK-29422:


 Summary: Flaky test: 
org.apache.spark.deploy.history.HistoryServerSuite ajax rendered relative links 
are prefixed with uiRoot
 Key: SPARK-29422
 URL: https://issues.apache.org/jira/browse/SPARK-29422
 Project: Spark
  Issue Type: Test
  Components: Deploy
Affects Versions: 3.0.0
Reporter: huangtianhua


The test org.apache.spark.deploy.history.HistoryServerSuite "ajax rendered 
relative links are prefixed with uiRoot" sometimes fails on our arm instance:

ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) *** 
FAILED ***

2 was not greater than 4 (HistoryServerSuite.scala:388)

It is not reproduced every time on the arm instance. Creating this issue to see 
whether it also happens on amplab jenkins.

 






[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-10-09 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), the 
other is based on a branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the arm tests with 
amplab jenkins.

About the k8s test on arm, we have tested it, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an arm instance and sent the details to shane knapp; thanks 
shane for adding the first arm job to amplab jenkins :) 

The other important thing is the leveldbjni project 
[https://github.com/fusesource/leveldbjni] (see also 
[https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built an arm64-supporting release of 
leveldbjni, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
choose the correct jar on arm or x86, because spark depends on some hadoop 
packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 too, 
unless hadoop releases a new leveldbjni jar with arm support. For now we 
download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
locally when testing spark on arm.

PS: The issues found and fixed:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]

 

SPARK-28467

[https://github.com/apache/spark/pull/25864]

 

SPARK-29286

[https://github.com/apache/spark/pull/26021]

 

 

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), the 
other is based on a branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the arm tests with 
amplab jenkins.

About the k8s test on arm, we have tested it, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an arm instance and sent the details to shane knapp; thanks 
shane for adding the first arm job to amplab jenkins :) 

The other important thing is the leveldbjni project 
[https://github.com/fusesource/leveldbjni] (see also 
[https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built an arm64-supporting release of 
leveldbjni, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
choose the correct jar on arm or x86, because spark depends on some hadoop 
packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 too, 
unless hadoop releases a new leveldbjni jar with arm support. For now we 
download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
locally when testing spark on arm.

PS: The issues found and fixed:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]

 

SPARK-28467

[https://github.com/apache/spark/pull/25864]

 

SPARK-29286

 

 

 


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: 

[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-10-09 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), the 
other is based on a branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the arm tests with 
amplab jenkins.

About the k8s test on arm, we have tested it, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an arm instance and sent the details to shane knapp; thanks 
shane for adding the first arm job to amplab jenkins :) 

The other important thing is the leveldbjni project 
[https://github.com/fusesource/leveldbjni] (see also 
[https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built an arm64-supporting release of 
leveldbjni, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
choose the correct jar on arm or x86, because spark depends on some hadoop 
packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 too, 
unless hadoop releases a new leveldbjni jar with arm support. For now we 
download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
locally when testing spark on arm.

PS: The issues found and fixed:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]

 

SPARK-28467

[https://github.com/apache/spark/pull/25864]

 

SPARK-29286

 

 

 

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we have made two periodic arm test jobs for spark in OpenLab: one is 
based on master with hadoop 2.7 (similar to the QA test on amplab jenkins), the 
other is based on a branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the arm tests with 
amplab jenkins.

About the k8s test on arm, we have tested it, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an arm instance and sent the details to shane knapp; thanks 
shane for adding the first arm job to amplab jenkins :) 

The other important thing is the leveldbjni project 
[https://github.com/fusesource/leveldbjni] (see also 
[https://github.com/fusesource/leveldbjni/issues/80]): spark depends on 
leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built an arm64-supporting release of 
leveldbjni, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify the spark pom.xml with a 'property'/'profile' to 
choose the correct jar on arm or x86, because spark depends on some hadoop 
packages like hadoop-hdfs, and those packages depend on leveldbjni-all-1.8 too, 
unless hadoop releases a new leveldbjni jar with arm support. For now we 
download the leveldbjni-all-1.8 from openlabtesting and 'mvn install' it 
locally when testing spark on arm.

PS: The issues found and fixed:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]

 

SPARK-28467

[https://github.com/apache/spark/pull/25864]

 

 

 


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to 

[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-10-09 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 We only have to care about the first one when integrate arm test with amplab 
jenkins.

About the k8s test on arm, we have took test it, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 

And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

The other important thing is about leveldbjni 
([https://github.com/fusesource/leveldbjni]): Spark depends on leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built a release of leveldbjni with arm64 
support, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify Spark's pom.xml with a property/profile to choose the 
correct jar on ARM or x86, because Spark depends on Hadoop packages such as 
hadoop-hdfs that also depend on leveldbjni-all-1.8, unless Hadoop releases with a 
new ARM-supporting leveldbjni jar. For now we download the leveldbjni-all-1.8 of 
openlabtesting and 'mvn install' it when running ARM tests for Spark.
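A Maven profile can also be activated automatically based on the OS, which is the approach later proposed in SPARK-34009. A hypothetical sketch of such a profile (the `leveldbjni.group` property name here is illustrative, not the actual pom.xml change):

```xml
<!-- Hypothetical pom.xml fragment: activate the 'aarch64' profile
     automatically when Maven runs on a Linux aarch64 host, instead of
     requiring -Paarch64 on the command line. The property name below
     is an assumption for illustration. -->
<profile>
  <id>aarch64</id>
  <properties>
    <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
  </properties>
  <activation>
    <os>
      <name>linux</name>
      <arch>aarch64</arch>
    </os>
  </activation>
</profile>
```

With OS-based activation, `mvn package` on an aarch64 host would pick up the openlabtesting artifact without any extra command-line flag.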

PS: The issues found and fixed:
 SPARK-28770: [https://github.com/apache/spark/pull/25673]
 SPARK-28519: [https://github.com/apache/spark/pull/25279]
 SPARK-28433: [https://github.com/apache/spark/pull/25186]
 SPARK-28467: [https://github.com/apache/spark/pull/25864]

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 We only have to care about the first one when integrate arm test with amplab 
jenkins.

About the k8s test on arm, we have took test it, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 

And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

The other important thing is about the leveldbjni 
[https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
 spark depends on leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 we can see there is no arm64 supporting. So we build an arm64 supporting 
release of leveldbjni see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't modified the spark pom.xml directly with something like 
'property'/'profile' to choose correct jar package on arm or x86 platform, 
because spark depends on some hadoop packages like hadoop-hdfs, the packages 
depend on leveldbjni-all-1.8 too, unless hadoop release with new arm supporting 
leveldbjni jar. Now we download the leveldbjni-al-1.8 of openlabtesting and 
'mvn install' to use it when arm testing for spark.

PS: The issues found and fixed:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in 

[jira] [Commented] (SPARK-29222) Flaky test: pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence

2019-10-08 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946609#comment-16946609
 ] 

huangtianhua commented on SPARK-29222:
--

The tests listed in -SPARK-29205- failed every time on the ARM instance, and after 
increasing the timeout and batch time they succeed, though we ran them only 
several times rather than 100. I have no idea how the batchDuration of 
StreamingContext should be set; is there a guiding principle? 

> Flaky test: 
> pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence
> ---
>
> Key: SPARK-29222
> URL: https://issues.apache.org/jira/browse/SPARK-29222
> Project: Spark
>  Issue Type: Test
>  Components: MLlib, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111237/testReport/]
> {code:java}
> Error Message
> 7 != 10
> StacktraceTraceback (most recent call last):
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergence
> self._eventually(condition, catch_assertions=True)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
> raise lastValue
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
> lastValue = condition()
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
> self.assertEqual(len(model_weights), len(batches))
> AssertionError: 7 != 10
>{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29286) UnicodeDecodeError raised when running python tests on arm instance

2019-10-07 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946422#comment-16946422
 ] 

huangtianhua commented on SPARK-29286:
--

Thanks all. I will test this on the ARM instance later to verify it :)

> UnicodeDecodeError raised when running python tests on arm instance
> ---
>
> Key: SPARK-29286
> URL: https://issues.apache.org/jira/browse/SPARK-29286
> Project: Spark
>  Issue Type: Test
>  Components: PySpark
>Affects Versions: 2.4.4, 3.0.0
>Reporter: huangtianhua
>Assignee: Hyukjin Kwon
>Priority: Major
> Fix For: 2.4.5, 3.0.0
>
>
> Run command 'python/run-tests --python-executables=python2.7,python3.6' on 
> arm instance, then UnicodeDecodeError raised:
> 
> Starting test(python2.7): pyspark.broadcast
> Got an exception while trying to store skipped test output:
> Traceback (most recent call last):
>  File "./python/run-tests.py", line 137, in run_individual_python_test
>  decoded_lines = map(lambda line: line.decode(), iter(per_test_output))
>  File "./python/run-tests.py", line 137, in <lambda>
>  decoded_lines = map(lambda line: line.decode(), iter(per_test_output))
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 51: 
> ordinal not in range(128)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-29286) UnicodeDecodeError raised when running python tests on arm instance

2019-09-29 Thread huangtianhua (Jira)
huangtianhua created SPARK-29286:


 Summary: UnicodeDecodeError raised when running python tests on 
arm instance
 Key: SPARK-29286
 URL: https://issues.apache.org/jira/browse/SPARK-29286
 Project: Spark
  Issue Type: Test
  Components: PySpark
Affects Versions: 3.0.0
Reporter: huangtianhua


Run command 'python/run-tests --python-executables=python2.7,python3.6' on arm 
instance, then UnicodeDecodeError raised:



Starting test(python2.7): pyspark.broadcast

Got an exception while trying to store skipped test output:
Traceback (most recent call last):
 File "./python/run-tests.py", line 137, in run_individual_python_test
 decoded_lines = map(lambda line: line.decode(), iter(per_test_output))
 File "./python/run-tests.py", line 137, in <lambda>
 decoded_lines = map(lambda line: line.decode(), iter(per_test_output))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 51: 
ordinal not in range(128)
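The failure comes from decoding subprocess output with Python 2's default ASCII codec. A minimal standalone sketch of the usual fix (assuming nothing about the actual run-tests.py patch) is to decode with an explicit encoding and a lenient error handler:

```python
# Sketch: decode captured test output with an explicit codec instead of
# the platform default, which raises UnicodeDecodeError on bytes like
# 0xe4 when the default codec is ASCII.
# `per_test_output` is illustrative sample data, not the real pipe.
per_test_output = [b"ok\n", b"caf\xc3\xa9\n", b"\xe4 raw byte\n"]

decoded_lines = [line.decode("utf-8", errors="replace")
                 for line in per_test_output]
```

`errors="replace"` substitutes U+FFFD for undecodable bytes instead of raising, so a stray non-UTF-8 byte no longer aborts the test-output collection.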



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29222) Flaky test: pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence

2019-09-29 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16940282#comment-16940282
 ] 

huangtianhua commented on SPARK-29222:
--

About the issues that happened on the ARM instance (SPARK-29205): in the end we 
increased the timeout and the batch time, see 
[https://github.com/theopenlab/spark/pull/27/files#diff-f7e50078760ce2d40f35e4c3b9112227],
 and then the tests pass.
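The helper being tuned here (`_eventually` in the pyspark test suite) retries a condition until a timeout expires. A simplified standalone sketch of that retry pattern, not the exact pyspark implementation:

```python
import time

def eventually(condition, timeout=30.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Slower machines (such as the ARM instance discussed above) may simply
    need a larger `timeout` for the same workload to converge.
    """
    deadline = time.time() + timeout
    last_error = None
    while time.time() < deadline:
        try:
            if condition():
                return True
        except AssertionError as e:  # mirrors catch_assertions=True
            last_error = e
        time.sleep(interval)
    if last_error is not None:
        raise last_error
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

Raising `timeout` (and the streaming batch interval feeding the condition) gives the model more wall-clock time to process all batches before the assertion is treated as a real failure.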

> Flaky test: 
> pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests.test_parameter_convergence
> ---
>
> Key: SPARK-29222
> URL: https://issues.apache.org/jira/browse/SPARK-29222
> Project: Spark
>  Issue Type: Test
>  Components: MLlib, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111237/testReport/]
> {code:java}
> Error Message
> 7 != 10
> StacktraceTraceback (most recent call last):
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergence
> self._eventually(condition, catch_assertions=True)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
> raise lastValue
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
> lastValue = condition()
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
> self.assertEqual(len(model_weights), len(batches))
> AssertionError: 7 != 10
>{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29205) Pyspark tests failed for suspected performance problem on ARM

2019-09-23 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935578#comment-16935578
 ] 

huangtianhua commented on SPARK-29205:
--

And we found the community faced a similar issue before: 
[https://github.com/apache/spark/commit/ab76900fedc05df7080c9b6c81d65a3f260c1c26#diff-f7e50078760ce2d40f35e4c3b9112227];
 if we increase the timeout, the tests pass. 

> Pyspark tests failed for suspected performance problem on ARM
> -
>
> Key: SPARK-29205
> URL: https://issues.apache.org/jira/browse/SPARK-29205
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark
>Affects Versions: 3.0.0
> Environment: OS: Ubuntu16.04
> Arch: aarch64
> Host: Virtual Machine
>Reporter: zhao bo
>Priority: Major
>
> We tested pyspark on an ARM VM and found some tests fail; once we changed the 
> source code to extend the wait time so those test tasks could finish, the 
> tests passed.
>  
> The affected test cases including:
> pyspark.mllib.tests.test_streaming_algorithms:StreamingLinearRegressionWithTests.test_parameter_convergence
> pyspark.mllib.tests.test_streaming_algorithms:StreamingLogisticRegressionWithSGDTests.test_convergence
> pyspark.mllib.tests.test_streaming_algorithms:StreamingLogisticRegressionWithSGDTests.test_parameter_accuracy
> pyspark.mllib.tests.test_streaming_algorithms:StreamingLogisticRegressionWithSGDTests.test_training_and_prediction
> The error message about above test fails:
> ==
> FAIL: test_parameter_convergence 
> (pyspark.mllib.tests.test_streaming_algorithms.StreamingLinearRegressionWithTests)
> Test that the model parameters improve with streaming data.
> --
> Traceback (most recent call last):
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 429, in test_parameter_convergen ce
>     self._eventually(condition, catch_assertions=True)
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
>     raise lastValue
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
>     lastValue = condition()
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 425, in condition
>     self.assertEqual(len(model_weights), len(batches))
> AssertionError: 6 != 10
>  
>  
> ==
> FAIL: test_convergence 
> (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
> --
> Traceback (most recent call last):
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 292, in test_convergence
>     self._eventually(condition, 60.0, catch_assertions=True)
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
>     raise lastValue
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
>     lastValue = condition()
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 288, in condition
>     self.assertEqual(len(models), len(input_batches))
> AssertionError: 19 != 20
>  
> ==
> FAIL: test_parameter_accuracy 
> (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
> --
> Traceback (most recent call last):
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 266, in test_parameter_accuracy
>     self._eventually(condition, catch_assertions=True)
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 74, in _eventually
>     raise lastValue
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 65, in _eventually
>     lastValue = condition()
>   File 
> "/usr/local/src/hth/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py",
>  line 263, in condition
>     self.assertAlmostEqual(rel, 0.1, 1)
> AssertionError: 0.21309223935797794 != 0.1 within 1 places
>  
> ==
> FAIL: test_training_and_prediction 
> (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
> 

[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

So far we have created two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on the amplab Jenkins), and the 
other is based on a new branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the ARM tests with the 
amplab Jenkins.

About the k8s tests on ARM, we have already tried them, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate them later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an ARM instance and sent the details to Shane Knapp; thanks Shane 
for adding the first ARM job to the amplab Jenkins :) 

The other important thing is about leveldbjni 
([https://github.com/fusesource/leveldbjni]): Spark depends on leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built a release of leveldbjni with arm64 
support, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify Spark's pom.xml with a property/profile to choose the 
correct jar on ARM or x86, because Spark depends on Hadoop packages such as 
hadoop-hdfs that also depend on leveldbjni-all-1.8, unless Hadoop releases with a 
new ARM-supporting leveldbjni jar. For now we download the leveldbjni-all-1.8 of 
openlabtesting and 'mvn install' it when running ARM tests for Spark.

PS: The issues found and fixed:
 SPARK-28770: [https://github.com/apache/spark/pull/25673]
 SPARK-28519: [https://github.com/apache/spark/pull/25279]
 SPARK-28433: [https://github.com/apache/spark/pull/25186]

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 We only have to care about the first one when integrate arm test with amplab 
jenkins.

About the k8s test on arm, we have took test it, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 

And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing 

[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933001#comment-16933001
 ] 

huangtianhua edited comment on SPARK-29106 at 9/19/19 6:33 AM:
---

[~dongjoon], thanks :)

So far we have created two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on the amplab Jenkins), and the 
other is based on a new branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 I think we only have to care about the first one when integrating the ARM tests 
with the amplab Jenkins. In fact we have already tried the k8s tests on ARM, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate them later. 
 We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an ARM instance and sent the details to Shane Knapp; thanks Shane 
for adding the first ARM job to the amplab Jenkins :) 

The other important thing is about leveldbjni 
([https://github.com/fusesource/leveldbjni]): Spark depends on leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built a release of leveldbjni with arm64 
support, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
 but we can't simply modify Spark's pom.xml with a property/profile to choose the 
correct jar on ARM or x86, because Spark depends on Hadoop packages such as 
hadoop-hdfs that also depend on leveldbjni-all-1.8, unless Hadoop releases with a 
new ARM-supporting leveldbjni jar. For now we download the leveldbjni-all-1.8 of 
openlabtesting and 'mvn install' it when running ARM tests for Spark.

PS: The issues found and fixed:
 SPARK-28770: [https://github.com/apache/spark/pull/25673]
 SPARK-28519: [https://github.com/apache/spark/pull/25279]
 SPARK-28433: [https://github.com/apache/spark/pull/25186]


was (Author: huangtianhua):
[~dongjoon], thanks :)

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64,|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 I think we only have to care about the first one when integrate arm test with 
amplab jenkins. In fact we have took test for k8s on arm, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]
  
  

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first 

[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

So far we have created two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on the amplab Jenkins), and the 
other is based on a new branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the ARM tests with the 
amplab Jenkins.

About the k8s tests on ARM, we have already tried them, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate them later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an ARM instance and sent the details to Shane Knapp; thanks Shane 
for adding the first ARM job to the amplab Jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770: [https://github.com/apache/spark/pull/25673]
 SPARK-28519: [https://github.com/apache/spark/pull/25279]
 SPARK-28433: [https://github.com/apache/spark/pull/25186]

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 We only have to care about the first one when integrate arm test with amplab 
jenkins.

About the k8s test on arm, we have took test it, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> ps: the issues found and fixed list:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add arm test jobs to amplab jenkins for spark.

So far we have created two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on the amplab Jenkins), and the 
other is based on a new branch we cut on 09-09, see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only have to care about the first one when integrating the ARM tests with the 
amplab Jenkins.

About the k8s tests on ARM, we have already tried them, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate them later. 

We also plan to test other stable branches, and we can integrate them into 
amplab when they are ready.

We have offered an ARM instance and sent the details to Shane Knapp; thanks Shane 
for adding the first ARM job to the amplab Jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770: [https://github.com/apache/spark/pull/25673]
 SPARK-28519: [https://github.com/apache/spark/pull/25279]
 SPARK-28433: [https://github.com/apache/spark/pull/25186]

  was:
Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to amplab 
to support arm test for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64,|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 I think we only have to care about the first one when integrate arm test with 
amplab jenkins. In fact we have took test for k8s on arm, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
SPARK-28770
[https://github.com/apache/spark/pull/25673]
 
SPARK-28519
[https://github.com/apache/spark/pull/25279]
 
SPARK-28433
[https://github.com/apache/spark/pull/25186]

 

 

 


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later.  And we plan test on other stable branches too, and we can integrate 
> them to amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> ps: the issues found and fixed list:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add ARM test jobs to Amplab Jenkins for Spark.

So far we have set up two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on Amplab Jenkins), and the other 
is based on a new branch we created on 09-09; see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 We only need to care about the first one when integrating ARM testing with Amplab 
jenkins.

About the k8s test on ARM: we have already tested it, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 
We also plan to test other stable branches, and we can integrate them into Amplab 
when they are ready.

We have provided an ARM instance and sent the details to Shane Knapp; thanks to Shane 
for adding the first ARM job to Amplab Jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]

  was:
Add arm test jobs to amplab jenkins for spark.

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7(similar with QA test of amplab jenkins), other one is 
based on a new branch which we made on date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 We only have to care about the first one when integrate arm test with amplab 
jenkins.

About the k8s test on arm, we have took test it, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later.  And we plan test on other stable branches too, and we can integrate 
> them to amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> ps: the issues found and fixed list:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add ARM test jobs to Amplab Jenkins. OpenLab will offer ARM instances to Amplab 
to support ARM testing for Spark.

 

  was:Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
amplab to support arm test for spark.


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-09-19 Thread huangtianhua (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-29106:
-
Description: 
Add ARM test jobs to Amplab Jenkins. OpenLab will offer ARM instances to Amplab 
to support ARM testing for Spark.

So far we have set up two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on Amplab Jenkins), and the other 
is based on a new branch we created on 09-09; see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 I think we only need to care about the first one when integrating ARM testing with 
Amplab Jenkins. In fact we have already tested k8s on ARM, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 
We also plan to test other stable branches, and we can integrate them into Amplab 
when they are ready.

We have provided an ARM instance and sent the details to Shane Knapp; thanks to Shane 
for adding the first ARM job to Amplab Jenkins :) 

ps: the issues found and fixed list:
SPARK-28770
[https://github.com/apache/spark/pull/25673]
 
SPARK-28519
[https://github.com/apache/spark/pull/25279]
 
SPARK-28433
[https://github.com/apache/spark/pull/25186]

  was:
Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to amplab 
to support arm test for spark.

 


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64,|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  I think we only have to care about the first one when integrate arm test 
> with amplab jenkins. In fact we have took test for k8s on arm, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later.  And we plan test on other stable branches too, and we can integrate 
> them to amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> ps: the issues found and fixed list:
> SPARK-28770
> [https://github.com/apache/spark/pull/25673]
>  
> SPARK-28519
> [https://github.com/apache/spark/pull/25279]
>  
> SPARK-28433
> [https://github.com/apache/spark/pull/25186]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-09-18 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933014#comment-16933014
 ] 

huangtianhua commented on SPARK-29106:
--

The other important thing is about leveldbjni 
([https://github.com/fusesource/leveldbjni], see also 
[https://github.com/fusesource/leveldbjni/issues/80]): Spark depends on 
leveldbjni-all-1.8 
[https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
 which has no arm64 support. So we built an arm64-capable release of leveldbjni, see 
[https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8].
 However, we can't simply modify Spark's pom.xml with something like a 
'property'/'profile' to choose the correct jar on the ARM or x86 platform, because 
Spark depends on some Hadoop packages, such as hadoop-hdfs, that themselves depend on 
leveldbjni-all-1.8, unless Hadoop releases a new ARM-capable leveldbjni jar. For now 
we download the openlabtesting leveldbjni-all-1.8 and 'mvn install' it when running 
ARM tests for Spark.
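For illustration, the 'property'/'profile' idea mentioned above could be sketched in a standalone pom.xml: Maven can activate a profile automatically based on the build host's OS, switching the leveldbjni groupId on aarch64. This is a hypothetical sketch, not Spark's actual pom.xml; in particular the `leveldb.group` property name is made up for this example and would have to be referenced wherever the leveldbjni dependency is declared.

```xml
<!-- Hypothetical sketch: auto-activate an 'aarch64' profile based on the
     host OS architecture and switch the leveldbjni groupId accordingly.
     The 'leveldb.group' property is an assumption for illustration. -->
<properties>
  <!-- default on x86_64 -->
  <leveldb.group>org.fusesource.leveldbjni</leveldb.group>
</properties>
<profiles>
  <profile>
    <id>aarch64</id>
    <activation>
      <os>
        <name>linux</name>
        <arch>aarch64</arch>
      </os>
    </activation>
    <properties>
      <!-- arm64-capable rebuild of leveldbjni-all 1.8 -->
      <leveldb.group>org.openlabtesting.leveldbjni</leveldb.group>
    </properties>
  </profile>
</profiles>
<!-- a dependency declaration would then use:
     <groupId>${leveldb.group}</groupId>
     <artifactId>leveldbjni-all</artifactId>
     <version>1.8</version> -->
```

This only works if no transitive dependency (e.g. from Hadoop) pins the org.fusesource artifact, which is exactly the limitation described above.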

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2019-09-18 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933001#comment-16933001
 ] 

huangtianhua edited comment on SPARK-29106 at 9/19/19 2:34 AM:
---

[~dongjoon], thanks :)

So far we have set up two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7 (similar to the QA tests on Amplab Jenkins), and the other 
is based on a new branch we created on 09-09; see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 I think we only need to care about the first one when integrating ARM testing with 
Amplab Jenkins. In fact we have already tested k8s on ARM, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 
We also plan to test other stable branches, and we can integrate them into Amplab 
when they are ready.

We have provided an ARM instance and sent the details to Shane Knapp; thanks to Shane 
for adding the first ARM job to Amplab Jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
 [https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]
  
  


was (Author: huangtianhua):
[~dongjoon], thanks :)

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7, other one is based on a new branch which we made on 
date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64,|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 I think we only have to care about the first one when integrate arm test with 
amplab jenkins. In fact we have took test for k8s on arm, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
[https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]
  
  

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2019-09-18 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933001#comment-16933001
 ] 

huangtianhua edited comment on SPARK-29106 at 9/19/19 2:31 AM:
---

[~dongjoon], thanks :)

So far we have set up two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7, and the other is based on a new branch we created on 
09-09; see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 I think we only need to care about the first one when integrating ARM testing with 
Amplab Jenkins. In fact we have already tested k8s on ARM, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 
We also plan to test other stable branches, and we can integrate them into Amplab 
when they are ready.

We have provided an ARM instance and sent the details to Shane Knapp; thanks to Shane 
for adding the first ARM job to Amplab Jenkins :) 

ps: the issues found and fixed list:
 SPARK-28770
[https://github.com/apache/spark/pull/25673]
  
 SPARK-28519
 [https://github.com/apache/spark/pull/25279]
  
 SPARK-28433
 [https://github.com/apache/spark/pull/25186]
  
  


was (Author: huangtianhua):
[~dongjoon], thanks :)

Till now we made two arm test periodic jobs for spark in OpenLab, one is based 
on master with hadoop 2.7, other one is based on a new branch which we made on 
date 09-09, see  
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
  and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64,|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
 I think we only have to care about the first one when integrate arm test with 
amplab jenkins. In fact we have took test for k8s on arm, see 
[https://github.com/theopenlab/spark/pull/17], maybe we can integrate it later. 
 And we plan test on other stable branches too, and we can integrate them to 
amplab when they are ready.

We have offered an arm instance and sent the infos to shane knapp, thanks shane 
to add the first arm job to amplab jenkins :) 

ps: the issues found and fixed list:
SPARK-28770
[https://github.com/apache/spark/pull/25673]
 
SPARK-28519
[https://github.com/apache/spark/pull/25279]
 
SPARK-28433
[https://github.com/apache/spark/pull/25186]
 
 

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-09-18 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933001#comment-16933001
 ] 

huangtianhua commented on SPARK-29106:
--

[~dongjoon], thanks :)

So far we have set up two periodic ARM test jobs for Spark in OpenLab: one is based 
on master with Hadoop 2.7, and the other is based on a new branch we created on 
09-09; see 
[http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
 and 
[http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64].
 I think we only need to care about the first one when integrating ARM testing with 
Amplab Jenkins. In fact we have already tested k8s on ARM, see 
[https://github.com/theopenlab/spark/pull/17]; maybe we can integrate it later. 
We also plan to test other stable branches, and we can integrate them into Amplab 
when they are ready.

We have provided an ARM instance and sent the details to Shane Knapp; thanks to Shane 
for adding the first ARM job to Amplab Jenkins :) 

ps: the issues found and fixed list:
SPARK-28770
[https://github.com/apache/spark/pull/25673]
 
SPARK-28519
[https://github.com/apache/spark/pull/25279]
 
SPARK-28433
[https://github.com/apache/spark/pull/25186]
 
 

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to 
> amplab to support arm test for spark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-29106) Add jenkins arm test for spark

2019-09-16 Thread huangtianhua (Jira)
huangtianhua created SPARK-29106:


 Summary: Add jenkins arm test for spark
 Key: SPARK-29106
 URL: https://issues.apache.org/jira/browse/SPARK-29106
 Project: Spark
  Issue Type: Test
  Components: Tests
Affects Versions: 2.4.4
Reporter: huangtianhua


Add arm test jobs to amplab jenkins. OpenLab will offer arm instances to amplab 
to support arm test for spark.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28770) Flaky Tests: Test ReplayListenerSuite.End-to-end replay with compression failed

2019-09-01 Thread huangtianhua (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920587#comment-16920587
 ] 

huangtianhua commented on SPARK-28770:
--

[~wypoon], thank you for looking into this. So you suggest removing the comparison of 
SparkListenerStageExecutorMetrics events for the two failed tests? 

> Flaky Tests: Test ReplayListenerSuite.End-to-end replay with compression 
> failed
> ---
>
> Key: SPARK-28770
> URL: https://issues.apache.org/jira/browse/SPARK-28770
> Project: Spark
>  Issue Type: Test
>  Components: Spark Core
>Affects Versions: 2.4.3
> Environment: Community jenkins and our arm testing instance.
>Reporter: huangtianhua
>Priority: Major
>
> Test
> org.apache.spark.scheduler.ReplayListenerSuite.End-to-end replay with 
> compression is failed  see 
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-3.2/267/testReport/junit/org.apache.spark.scheduler/ReplayListenerSuite/End_to_end_replay_with_compression/]
>  
> And also the test is failed on arm instance, I sent email to spark-dev 
> before, and we suspect there is something related with the commit 
> [https://github.com/apache/spark/pull/23767], we tried to revert it and the 
> tests are passed:
> ReplayListenerSuite:
>        - ...
>        - End-to-end replay *** FAILED ***
>          "[driver]" did not equal "[1]" (JsonProtocolSuite.scala:622)
>        - End-to-end replay with compression *** FAILED ***
>          "[driver]" did not equal "[1]" (JsonProtocolSuite.scala:622) 
>  
> Not sure what's wrong, hope someone can help to figure it out, thanks very 
> much.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-28770) Test ReplayListenerSuite.End-to-end replay with compression failed

2019-08-18 Thread huangtianhua (JIRA)
huangtianhua created SPARK-28770:


 Summary: Test ReplayListenerSuite.End-to-end replay with 
compression failed
 Key: SPARK-28770
 URL: https://issues.apache.org/jira/browse/SPARK-28770
 Project: Spark
  Issue Type: Test
  Components: Spark Core
Affects Versions: 2.4.3
 Environment: Community jenkins and our arm testing instance.
Reporter: huangtianhua


The test org.apache.spark.scheduler.ReplayListenerSuite "End-to-end replay with 
compression" failed, see 
[https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-3.2/267/testReport/junit/org.apache.spark.scheduler/ReplayListenerSuite/End_to_end_replay_with_compression/]

 

The test also fails on our ARM instance. I sent an email to spark-dev before; we 
suspect it is related to the commit in 
[https://github.com/apache/spark/pull/23767]. We tried reverting it, and the 
tests passed:

ReplayListenerSuite:
       - ...
       - End-to-end replay *** FAILED ***
         "[driver]" did not equal "[1]" (JsonProtocolSuite.scala:622)
       - End-to-end replay with compression *** FAILED ***
         "[driver]" did not equal "[1]" (JsonProtocolSuite.scala:622) 

 

We are not sure what's wrong; we hope someone can help figure it out. Thanks very much.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28519) Tests failed on aarch64 due the value of math.log and power function is different

2019-07-28 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894876#comment-16894876
 ] 

huangtianhua commented on SPARK-28519:
--

Sorry, I didn't see that you had already proposed a PR. Thank you very much.

> Tests failed on aarch64 due the value of math.log and power function is 
> different
> -
>
> Key: SPARK-28519
> URL: https://issues.apache.org/jira/browse/SPARK-28519
> Project: Spark
>  Issue Type: Test
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Sorry to disturb again, we ran unit tests on arm64 instance, and there are 
> other sql tests failed:
> {code}
>  - pgSQL/float8.sql *** FAILED ***
>  Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
> "{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query 
> #56
>  SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
>  - pgSQL/numeric.sql *** FAILED ***
>  Expected "2 {color:#59afe1}2247902679199174[72{color} 
> 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
> "2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result 
> did not match for query #496
>  SELECT t1.id1, t1.result, t2.expected
>  FROM num_result t1, num_exp_power_10_ln t2
>  WHERE t1.id1 = t2.id
>  AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)
> {code}
> The first test failed, because the value of math.log(3.0) is different on 
> aarch64:
> # on x86_64:
> {code}
> scala> math.log(3.0)
> res50: Double = 1.0986122886681098
> {code}
> # on aarch64:
> {code}
> scala> math.log(3.0)
> res19: Double = 1.0986122886681096
> {code}
> And I tried {{math.log(4.0)}}, {{math.log(5.0)}} and they are same, I don't 
> know why {{math.log(3.0)}} is so special? But the result is different indeed 
> on aarch64.
> The second test failed, because some values of pow() is different on aarch64, 
> according to the test, I took tests on aarch64 and x86_64, take '-83028485' 
> as example:
> # on x86_64:
> {code}
> scala> import java.lang.Math._
> import java.lang.Math._
> scala> abs(-83028485)
> res3: Int = 83028485
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res4: Int = 83028485
> scala> math.log(abs(a))
> res5: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res6: Double ={color:#d04437} 1.71669957511859584E18{color}
> {code}
> # on aarch64:
> {code}
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res38: Int = 83028485
> scala> math.log(abs(a))
> res39: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res40: Double = 1.71669957511859558E18
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28519) Tests failed on aarch64 due the value of math.log and power function is different

2019-07-28 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894854#comment-16894854
 ] 

huangtianhua commented on SPARK-28519:
--

Thank you all. I will test with the modification to see whether other similar tests 
fail, and will address them together in one pull request.
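As context for why such last-digit differences appear: IEEE 754 does not require transcendental functions like log and pow to be correctly rounded, so libm results may legitimately differ by one ulp (unit in the last place) between platforms. A minimal Python sketch (the two values are copied from the report above; `math.ulp` needs Python 3.9+) of the kind of tolerance-based comparison a test could use instead of exact matching:

```python
import math

# math.log(3.0) as reported on x86_64 vs aarch64; they differ only in
# the last bit of the double.
x86 = 1.0986122886681098
arm = 1.0986122886681096

# The gap is at most one ulp at this magnitude.
assert abs(x86 - arm) <= math.ulp(x86)

# A tolerance-based comparison treats the two results as equal.
assert math.isclose(x86, arm, rel_tol=1e-15)
```

Comparing with a small relative tolerance (or truncating expected output to fewer significant digits) makes such tests portable across architectures.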

> Tests failed on aarch64 due the value of math.log and power function is 
> different
> -
>
> Key: SPARK-28519
> URL: https://issues.apache.org/jira/browse/SPARK-28519
> Project: Spark
>  Issue Type: Test
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Sorry to disturb again, we ran unit tests on arm64 instance, and there are 
> other sql tests failed:
> {code}
>  - pgSQL/float8.sql *** FAILED ***
>  Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
> "{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query 
> #56
>  SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
>  - pgSQL/numeric.sql *** FAILED ***
>  Expected "2 {color:#59afe1}2247902679199174[72{color} 
> 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
> "2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result 
> did not match for query #496
>  SELECT t1.id1, t1.result, t2.expected
>  FROM num_result t1, num_exp_power_10_ln t2
>  WHERE t1.id1 = t2.id
>  AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)
> {code}
> The first test failed, because the value of math.log(3.0) is different on 
> aarch64:
> # on x86_64:
> {code}
> scala> math.log(3.0)
> res50: Double = 1.0986122886681098
> {code}
> # on aarch64:
> {code}
> scala> math.log(3.0)
> res19: Double = 1.0986122886681096
> {code}
> And I tried {{math.log(4.0)}}, {{math.log(5.0)}} and they are same, I don't 
> know why {{math.log(3.0)}} is so special? But the result is different indeed 
> on aarch64.
> The second test failed, because some values of pow() is different on aarch64, 
> according to the test, I took tests on aarch64 and x86_64, take '-83028485' 
> as example:
> # on x86_64:
> {code}
> scala> import java.lang.Math._
> import java.lang.Math._
> scala> abs(-83028485)
> res3: Int = 83028485
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res4: Int = 83028485
> scala> math.log(abs(a))
> res5: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res6: Double ={color:#d04437} 1.71669957511859584E18{color}
> {code}
> # on aarch64:
> {code}
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res38: Int = 83028485
> scala> math.log(abs(a))
> res39: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res40: Double = 1.71669957511859558E18
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-28519) Tests failed on aarch64 due the value of math.log and power function is different

2019-07-28 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894854#comment-16894854
 ] 

huangtianhua edited comment on SPARK-28519 at 7/29/19 1:40 AM:
---

Thank you all. I will test with the modification to see whether any other 
similar tests fail, and will address them together in one pull request.


was (Author: huangtianhua):
Thank you all. I will test with modification and to see whether there are other 
similar tests fail, and will address them togother in one pull request.

> Tests failed on aarch64 due the value of math.log and power function is 
> different
> -
>
> Key: SPARK-28519
> URL: https://issues.apache.org/jira/browse/SPARK-28519
> Project: Spark
>  Issue Type: Test
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Major
>
> Sorry to disturb again: we ran unit tests on an arm64 instance, and other 
> SQL tests failed:
> {code}
>  - pgSQL/float8.sql *** FAILED ***
>  Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
> "{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query 
> #56
>  SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
>  - pgSQL/numeric.sql *** FAILED ***
>  Expected "2 {color:#59afe1}2247902679199174[72{color} 
> 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
> "2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
>  4 7405685069595001 7405685069594999.0773399947
>  5 5068226527.321263 5068226527.3212726541
>  6 281839893606.99365 281839893606.9937234336
>  7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
>  8 167361463828.0749 167361463828.0749132007
>  9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result 
> did not match for query #496
>  SELECT t1.id1, t1.result, t2.expected
>  FROM num_result t1, num_exp_power_10_ln t2
>  WHERE t1.id1 = t2.id
>  AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)
> {code}
> The first test failed, because the value of math.log(3.0) is different on 
> aarch64:
> # on x86_64:
> {code}
> scala> math.log(3.0)
> res50: Double = 1.0986122886681098
> {code}
> # on aarch64:
> {code}
> scala> math.log(3.0)
> res19: Double = 1.0986122886681096
> {code}
> And I tried {{math.log(4.0)}} and {{math.log(5.0)}}, and they are the same; I 
> don't know why {{math.log(3.0)}} is special, but the result is indeed different 
> on aarch64.
> The second test failed because some values of pow() are different on aarch64. 
> Following the test, I ran it on both aarch64 and x86_64; take '-83028485' 
> as an example:
> # on x86_64:
> {code}
> scala> import java.lang.Math._
> import java.lang.Math._
> scala> abs(-83028485)
> res3: Int = 83028485
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res4: Int = 83028485
> scala> math.log(abs(a))
> res5: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res6: Double ={color:#d04437} 1.71669957511859584E18{color}
> {code}
> # on aarch64:
> {code}
> scala> var a = -83028485
> a: Int = -83028485
> scala> abs(a)
> res38: Int = 83028485
> scala> math.log(abs(a))
> res39: Double = 18.234694299654787
> scala> pow(10, math.log(abs(a)))
> res40: Double = 1.71669957511859558E18
> {code}






[jira] [Updated] (SPARK-28519) Tests failed on aarch64 due the value of math.log and power function is different

2019-07-26 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-28519:
-
Description: 
Sorry to disturb again: we ran unit tests on an arm64 instance, and other SQL 
tests failed:
 - pgSQL/float8.sql *** FAILED ***
 Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
"{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query #56
 SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
 - pgSQL/numeric.sql *** FAILED ***
 Expected "2 {color:#59afe1}2247902679199174[72{color} 
224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
"2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result did 
not match for query #496
 SELECT t1.id1, t1.result, t2.expected
 FROM num_result t1, num_exp_power_10_ln t2
 WHERE t1.id1 = t2.id
 AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)

The first test failed, because the value of math.log(3.0) is different on 
aarch64:

# on x86_64:
 scala> math.log(3.0)
 res50: Double = 1.0986122886681098

# on aarch64:
 scala> math.log(3.0)
 res19: Double = 1.0986122886681096

And I tried math.log(4.0) and math.log(5.0), and they are the same; I don't know 
why math.log(3.0) is special, but the result is indeed different on aarch64.

The second test failed because some values of pow() are different on aarch64. 
Following the test, I ran it on both aarch64 and x86_64; take '-83028485' as an 
example:

# on x86_64:
 scala> import java.lang.Math._
 import java.lang.Math._
 scala> abs(-83028485)
 res3: Int = 83028485
 scala> var a = -83028485
 a: Int = -83028485
 scala> abs(a)
 res4: Int = 83028485
 scala> math.log(abs(a))
 res5: Double = 18.234694299654787
 scala> pow(10, math.log(abs(a)))
 res6: Double ={color:#d04437} 1.71669957511859584E18{color}

# on aarch64:
 scala> var a = -83028485
 a: Int = -83028485
 scala> abs(a)
 res38: Int = 83028485
 scala> math.log(abs(a))
 res39: Double = 18.234694299654787
 scala> pow(10, math.log(abs(a)))
 res40: Double = {color:#d04437}1.71669957511859558E18{color}

  was:
Sorry to disturb again, we ran unit tests on arm64 instance, and there are 
other sql tests failed:
 - pgSQL/float8.sql *** FAILED ***
 Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
"{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query #56
 SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
 - pgSQL/numeric.sql *** FAILED ***
 Expected "2 {color:#59afe1}2247902679199174[72{color} 
224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
"2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result did 
not match for query #496
 SELECT t1.id1, t1.result, t2.expected
 FROM num_result t1, num_exp_power_10_ln t2
 WHERE t1.id1 = t2.id
 AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)

The first test failed, because the value of math.log(3.0) is different on 
aarch64:

# on x86_64:
 scala> math.log(3.0)
 res50: Double = 1.0986122886681098

# on aarch64:
 scala> math.log(3.0)
 res19: Double = 1.0986122886681096

And I tried math.log(4.0), math.log(5.0) and they are same, I don't know why 
math.log(3.0) is so special? But the result is different indeed on aarch64.

The second test failed, because some values of pow() is different on aarch64, 
according to the test, I took tests on aarch64 and x86_64, take '-83028485' as 
example:

# on x86_64:
 scala> import java.lang.Math._
 import java.lang.Math._
 scala> abs(-83028485)
 res3: Int = 83028485
 scala> var a = -83028485
 a: Int = -83028485
 scala> abs(a)
 res4: Int = 83028485
 scala> math.log(abs(a))
 res5: 

[jira] [Created] (SPARK-28519) Tests failed on aarch64 due the value of math.log and power function is different

2019-07-26 Thread huangtianhua (JIRA)
huangtianhua created SPARK-28519:


 Summary: Tests failed on aarch64 due the value of math.log and 
power function is different
 Key: SPARK-28519
 URL: https://issues.apache.org/jira/browse/SPARK-28519
 Project: Spark
  Issue Type: Test
  Components: SQL
Affects Versions: 3.0.0
Reporter: huangtianhua


Sorry to disturb again: we ran unit tests on an arm64 instance, and other SQL 
tests failed:
 - pgSQL/float8.sql *** FAILED ***
 Expected "{color:#f691b2}0.549306144334054[9]{color}", but got 
"{color:#f691b2}0.549306144334054[8]{color}" Result did not match for query #56
 SELECT atanh(double('0.5')) (SQLQueryTestSuite.scala:362)
 - pgSQL/numeric.sql *** FAILED ***
 Expected "2 {color:#59afe1}2247902679199174[72{color} 
224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595840{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051856]{color} 107511333880052007", but got 
"2 {color:#59afe1}2247902679199174[40{color} 224790267919917955.1326161858
 4 7405685069595001 7405685069594999.0773399947
 5 5068226527.321263 5068226527.3212726541
 6 281839893606.99365 281839893606.9937234336
 7 {color:#d04437}1716699575118595580{color} 1716699575118597095.4233081991
 8 167361463828.0749 167361463828.0749132007
 9 {color:#14892c}107511333880051872]{color} 107511333880052007" Result did 
not match for query #496
 SELECT t1.id1, t1.result, t2.expected
 FROM num_result t1, num_exp_power_10_ln t2
 WHERE t1.id1 = t2.id
 AND t1.result != t2.expected (SQLQueryTestSuite.scala:362)

The first test failed, because the value of math.log(3.0) is different on 
aarch64:

# on x86_64:
 scala> math.log(3.0)
 res50: Double = 1.0986122886681098

# on aarch64:
 scala> math.log(3.0)
 res19: Double = 1.0986122886681096

And I tried math.log(4.0) and math.log(5.0), and they are the same; I don't know 
why math.log(3.0) is special, but the result is indeed different on aarch64.

The second test failed because some values of pow() are different on aarch64. 
Following the test, I ran it on both aarch64 and x86_64; take '-83028485' as an 
example:

# on x86_64:
 scala> import java.lang.Math._
 import java.lang.Math._
 scala> abs(-83028485)
 res3: Int = 83028485
 scala> var a = -83028485
 a: Int = -83028485
 scala> abs(a)
 res4: Int = 83028485
 scala> math.log(abs(a))
 res5: Double = 18.234694299654787
 scala> pow(10, math.log(abs(a)))
 res6: Double ={color:#d04437} 1.71669957511859584E18{color}

# on aarch64:
 scala> var a = -83028485
 a: Int = -83028485
 scala> abs(a)
 res38: Int = 83028485
 scala> math.log(abs(a))
 res39: Double = 18.234694299654787
 scala> pow(10, math.log(abs(a)))
 res40: Double = {color:#d04437}1.71669957511859558E18{color}
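A likely explanation for this difference: the JVM is allowed to compile Math.log
and Math.pow to platform intrinsics, whose last bit may vary between x86_64 and
aarch64, whereas java.lang.StrictMath is specified to follow the fdlibm
algorithms and returns bit-identical results on every platform. A minimal Java
sketch of the comparison (the 1-ulp bound reflects the Math.log accuracy
contract, not a Spark API):

```java
public class StrictMathCheck {
    public static void main(String[] args) {
        // Math.log may be compiled to a hardware intrinsic, so its last bit can
        // differ across architectures; StrictMath.log follows fdlibm and is
        // reproducible everywhere.
        double fast = Math.log(3.0);
        double strict = StrictMath.log(3.0);
        System.out.println("Math.log(3.0)       = " + fast);
        System.out.println("StrictMath.log(3.0) = " + strict);
        // Both results must be within 1 ulp of the exact value, so they sit at
        // most a couple of ulps apart.
        System.out.println(Math.abs(fast - strict) <= 2 * Math.ulp(strict));
    }
}
```

Switching a test (or the code under test) to StrictMath is one way to make such
assertions architecture-independent, at some cost in speed.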






[jira] [Created] (SPARK-28467) Tests failed if there are not enough executors up before running

2019-07-21 Thread huangtianhua (JIRA)
huangtianhua created SPARK-28467:


 Summary: Tests failed if there are not enough executors up before 
running
 Key: SPARK-28467
 URL: https://issues.apache.org/jira/browse/SPARK-28467
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 3.0.0
Reporter: huangtianhua


We ran unit tests on an arm64 instance, and some tests failed because the 
executors could not come up within the 10000 ms timeout:
- test driver discovery under local-cluster mode *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 1 executors before 10000 
milliseconds elapsed
  at org.apache.spark.TestUtils$.waitUntilExecutorsUp(TestUtils.scala:293)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$78(SparkContextSuite.scala:753)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$78$adapted(SparkContextSuite.scala:741)
  at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$77(SparkContextSuite.scala:741)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)
  
- test gpu driver resource files and discovery under local-cluster mode *** 
FAILED ***
  java.util.concurrent.TimeoutException: Can't find 1 executors before 10000 
milliseconds elapsed
  at org.apache.spark.TestUtils$.waitUntilExecutorsUp(TestUtils.scala:293)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$80(SparkContextSuite.scala:781)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$80$adapted(SparkContextSuite.scala:761)
  at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:161)
  at 
org.apache.spark.SparkContextSuite.$anonfun$new$79(SparkContextSuite.scala:761)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)

And then we increased the timeout to 20000 (or 30000) and the tests passed. I 
found there are earlier issues about increasing this timeout, see: 
https://issues.apache.org/jira/browse/SPARK-7989 and 
https://issues.apache.org/jira/browse/SPARK-10651 
I think the timeout doesn't work well, and there seems to be no principle behind 
the timeout setting. How can I fix this? Could I increase the timeout for these 
two tests?
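The pattern behind waitUntilExecutorsUp is a deadline-based poll: keep checking
a condition until it holds or the deadline passes, so slower hardware only pays
for the time it actually needs. A hypothetical, self-contained sketch (the
WaitUtil name and signature are illustrative, not Spark's TestUtils API):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.IntSupplier;

public class WaitUtil {
    /**
     * Polls currentCount until it reaches expected or timeoutMs elapses.
     * Illustrative helper, loosely modelled on TestUtils.waitUntilExecutorsUp.
     */
    public static void waitUntil(IntSupplier currentCount, int expected, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (currentCount.getAsInt() >= expected) {
                return;  // condition met before the deadline
            }
            Thread.sleep(10);  // brief back-off between polls
        }
        throw new TimeoutException("Can't find " + expected + " executors before "
                + timeoutMs + " milliseconds elapsed");
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // A counter that "comes up" after ~50 ms, standing in for executor registration.
        waitUntil(() -> (System.nanoTime() - start) > 50_000_000L ? 1 : 0, 1, 10000);
        System.out.println("executors came up in time");
    }
}
```

With this shape, raising the timeout on slow machines only lengthens the worst
case; fast machines still return as soon as the executors register.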






[jira] [Created] (SPARK-28433) Incorrect assertion in scala test for aarch64 platform

2019-07-17 Thread huangtianhua (JIRA)
huangtianhua created SPARK-28433:


 Summary: Incorrect assertion in scala test for aarch64 platform
 Key: SPARK-28433
 URL: https://issues.apache.org/jira/browse/SPARK-28433
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.4.3, 3.0.0
Reporter: huangtianhua


We ran unit tests of Spark on an aarch64 server, and two SQL Scala tests 
failed: 
- SPARK-26021: NaN and -0.0 in grouping expressions *** FAILED ***
   2143289344 equaled 2143289344 (DataFrameAggregateSuite.scala:732)
 - NaN and -0.0 in window partition keys *** FAILED ***
   2143289344 equaled 2143289344 (DataFrameWindowFunctionsSuite.scala:704)

We found that the values of floatToRawIntBits(0.0f / 0.0f) and 
floatToRawIntBits(Float.NaN) are the same (2143289344) on aarch64. At first we 
thought it was something about the JDK or Scala, but after discussing with 
jdk-dev and the Scala community (see 
https://users.scala-lang.org/t/the-value-of-floattorawintbits-0-0f-0-0f-is-different-on-x86-64-and-aarch64-platforms/4845
 ), we believe the value depends on the architecture.
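The distinction that matters here: floatToRawIntBits preserves whatever NaN
payload the hardware produced (which differs between x86_64 and aarch64 for
0.0f / 0.0f), while floatToIntBits collapses every NaN to the canonical pattern
0x7FC00000 (2143289344). A small Java sketch, assuming only the standard
library:

```java
public class NaNBits {
    public static void main(String[] args) {
        float computed = 0.0f / 0.0f;  // a NaN whose bit pattern is hardware-dependent
        // Raw bits keep the payload: this value may differ between x86_64 and aarch64.
        System.out.println("raw bits:       "
                + Integer.toHexString(Float.floatToRawIntBits(computed)));
        // floatToIntBits canonicalizes every NaN to 0x7fc00000 on all platforms.
        System.out.println("canonical bits: "
                + Integer.toHexString(Float.floatToIntBits(computed)));
        // prints true everywhere, since both sides canonicalize
        System.out.println(Float.floatToIntBits(computed)
                == Float.floatToIntBits(Float.NaN));
    }
}
```

Tests that compare NaN bit patterns therefore stay portable if they go through
floatToIntBits rather than floatToRawIntBits.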








[jira] [Commented] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840188#comment-16840188
 ] 

huangtianhua commented on SPARK-27721:
--

[~hyukjin.kwon] Sorry, I have changed the type to Question; I am just asking for 
help. The link you mentioned doesn't seem to be the same error as this one. I 
would also like to know whether Spark supports building on the aarch64 platform. 
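The UnsatisfiedLinkError below comes from the stock leveldbjni artifact shipping
only an x86_64 native library. Maven can select an aarch64-capable replacement
automatically by activating a profile from the detected OS, as SPARK-34009 later
proposed. A sketch of such a profile, assuming the org.openlabtesting.leveldbjni
coordinates mentioned in that issue and a leveldbjni.group property referenced
by the dependency declaration:

```xml
<!-- Activates automatically on linux/aarch64, with no -P flag needed,
     and swaps in the aarch64-capable leveldbjni build. -->
<profile>
  <id>aarch64</id>
  <properties>
    <leveldbjni.group>org.openlabtesting.leveldbjni</leveldbjni.group>
  </properties>
  <activation>
    <os>
      <name>linux</name>
      <arch>aarch64</arch>
    </os>
  </activation>
</profile>
```

OS-based activation keys off the JVM's os.name and os.arch values, so the same
pom builds unchanged on both x86_64 and aarch64 hosts.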

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Question
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on spark master and branch -2.4
> And the error log pieces as below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> 

[jira] [Updated] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-27721:
-
Issue Type: Question  (was: Bug)

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Question
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on spark master and branch -2.4
> And the error log pieces as below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithSkip(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> 

[jira] [Commented] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840179#comment-16840179
 ] 

huangtianhua commented on SPARK-27721:
--

The environment info is :
[root@arm-huangtianhua spark]# which java
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
You have new mail in /var/spool/mail/root
[root@arm-huangtianhua spark]# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
[root@arm-huangtianhua spark]# which mvn
/usr/local/src/apache-maven-3.6.1/bin/mvn
[root@arm-huangtianhua spark]# uname -a
Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
2018 aarch64 aarch64 aarch64 GNU/Linux


> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on spark master and branch -2.4
> And the error log pieces as below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> 

[jira] [Reopened] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua reopened SPARK-27721:
--

I used the cmd ./build/mvn test already.

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on Spark master and branch-2.4.
> The error log excerpts are below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithSkip(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> 

[jira] [Updated] (SPARK-27721) spark ./build/mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-27721:
-
Summary: spark ./build/mvn test failed on aarch64  (was: spark mvn test 
failed on aarch64)

> spark ./build/mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on Spark master and branch-2.4.
> The error log excerpts are below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithSkip(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> 

[jira] [Commented] (SPARK-27721) spark mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840162#comment-16840162
 ] 

huangtianhua commented on SPARK-27721:
--

[~hyukjin.kwon] Thank you very much. Yes, I followed the guide and the error 
was raised. My platform is CentOS 7 aarch64; running ./build/mvn -DskipTests 
clean package succeeds, but running ./build/mvn test fails.
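A quick way to confirm the architecture mismatch behind the UnsatisfiedLinkError is to compare what the host reports with the name the JVM uses. A minimal sketch — the `arch_to_jvm` helper and its mapping are illustrative assumptions, not part of Spark's build:

```shell
# Map `uname -m` output to the os.arch name the JVM reports
# (assumed mapping; HotSpot reports "amd64" on x86_64 Linux).
arch_to_jvm() {
  case "$1" in
    x86_64)  echo amd64 ;;    # Intel/AMD 64-bit
    aarch64) echo aarch64 ;;  # ARM 64-bit
    *)       echo "$1" ;;     # pass anything else through
  esac
}

# The error above means the .so that Surefire extracted was built
# for amd64 while the host reports aarch64:
arch_to_jvm "$(uname -m)"
```

Running `file` on the extracted library under common/kvstore/target/tmp on an affected host would likewise show an x86-64 ELF object rather than an aarch64 one.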


> spark mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on Spark master and branch-2.4.
> The error log excerpts are below:
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.2 s  <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
>  Time elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>   Time elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
> elapsed: 0.001 s  <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
> at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> 

[jira] [Updated] (SPARK-27721) spark mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-27721:
-
Description: 
./build/mvn test failed on Spark master and branch-2.4.

The error log excerpts are below:


[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 s 
- in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
[INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 s 
- in org.apache.spark.util.kvstore.InMemoryStoreSuite
[INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
[INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 s 
- in org.apache.spark.util.kvstore.InMemoryIteratorSuite
[INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
[ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 0.23 
s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
[ERROR] 
copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
  Time elapsed: 0.2 s  <<< ERROR!
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, 
no leveldbjni in java.library.path, 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 cannot open shared object file: No such file or directory (Possible cause: 
can't load AMD 64-bit .so on a AARCH64-bit platform)]
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  
Time elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] 
numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
  Time elapsed: 0.001 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] copyIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
 Time elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] childIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
 Time elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] childIndexWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  
Time elapsed: 0.001 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] childIndexWithSkip(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  
Time elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] childIndexWithMax(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  
Time elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] 
naturalIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  
Time elapsed: 0.001 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] 
numericIndexDescendingWithLast(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
  Time elapsed: 0.001 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] 
childIndexDescending(org.apache.spark.util.kvstore.LevelDBIteratorSuite)  Time 
elapsed: 0 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize 
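The underlying cause is that the stock org.fusesource.leveldbjni artifact bundles only an amd64 native library. A hedged sketch of the kind of OS-activated Maven profile later proposed for this (the org.openlabtesting.leveldbjni coordinates come from SPARK-34009; the artifactId and version below are illustrative, not copied from Spark's pom):

```xml
<!-- Sketch only: activate an aarch64 profile automatically based on
     the build host's OS, and pull in the openlabtesting fork of
     leveldbjni, which ships aarch64 native libraries. -->
<profile>
  <id>aarch64</id>
  <activation>
    <os>
      <name>linux</name>
      <arch>aarch64</arch>
    </os>
  </activation>
  <dependencies>
    <dependency>
      <groupId>org.openlabtesting.leveldbjni</groupId>
      <artifactId>leveldbjni-all</artifactId>
      <version>1.8</version>
    </dependency>
  </dependencies>
</profile>
```

With OS-based activation, no explicit -P flag is needed on aarch64 hosts; Maven matches the profile against the detected os.name/os.arch.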

[jira] [Updated] (SPARK-27721) spark mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huangtianhua updated SPARK-27721:
-
Attachment: branch2.4-mvn test

> spark mvn test failed on aarch64
> 
>
> Key: SPARK-27721
> URL: https://issues.apache.org/jira/browse/SPARK-27721
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.4.0, 3.0.0
> Environment: [root@arm-huangtianhua spark]# which java
> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
> You have new mail in /var/spool/mail/root
> [root@arm-huangtianhua spark]# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> [root@arm-huangtianhua spark]# which mvn
> /usr/local/src/apache-maven-3.6.1/bin/mvn
> [root@arm-huangtianhua spark]# uname -a
> Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
> 2018 aarch64 aarch64 aarch64 GNU/Linux
>Reporter: huangtianhua
>Priority: Major
> Attachments: branch2.4-mvn test
>
>
> ./build/mvn test failed on Spark master and branch-2.4.
> The error log excerpts are below:
> [INFO] T E S T S
> [INFO] ---
> [INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
> s - in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.spark.util.kvstore.InMemoryStoreSuite
> [INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 
> s - in org.apache.spark.util.kvstore.InMemoryIteratorSuite
> [INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 
> 0.23 s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
> [ERROR] 
> copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>  Time elapsed: 0.2 s <<< ERROR!
> java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
> leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in 
> java.library.path, no leveldbjni in java.library.path, 
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  
> /usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
>  cannot open shared object file: No such file or directory (Possible cause: 
> can't load AMD 64-bit .so on a AARCH64-bit platform)]
>  at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
> Time elapsed: 0 s <<< ERROR!
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.fusesource.leveldbjni.JniDBFactory
>  at 
> org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)
> [ERROR] 
> numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
>  Time elap



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-27721) spark mvn test failed on aarch64

2019-05-15 Thread huangtianhua (JIRA)
huangtianhua created SPARK-27721:


 Summary: spark mvn test failed on aarch64
 Key: SPARK-27721
 URL: https://issues.apache.org/jira/browse/SPARK-27721
 Project: Spark
  Issue Type: Bug
  Components: Tests
Affects Versions: 2.4.0, 3.0.0
 Environment: [root@arm-huangtianhua spark]# which java
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.aarch64/bin/java
You have new mail in /var/spool/mail/root
[root@arm-huangtianhua spark]# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
[root@arm-huangtianhua spark]# which mvn
/usr/local/src/apache-maven-3.6.1/bin/mvn
[root@arm-huangtianhua spark]# uname -a
Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 
2018 aarch64 aarch64 aarch64 GNU/Linux
Reporter: huangtianhua


./build/mvn test failed on Spark master and branch-2.4.

The error log excerpts are below:

[INFO] T E S T S
[INFO] ---
[INFO] Running org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 s 
- in org.apache.spark.util.kvstore.LevelDBTypeInfoSuite
[INFO] Running org.apache.spark.util.kvstore.InMemoryStoreSuite
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 s 
- in org.apache.spark.util.kvstore.InMemoryStoreSuite
[INFO] Running org.apache.spark.util.kvstore.InMemoryIteratorSuite
[INFO] Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 s 
- in org.apache.spark.util.kvstore.InMemoryIteratorSuite
[INFO] Running org.apache.spark.util.kvstore.LevelDBIteratorSuite
[ERROR] Tests run: 38, Failures: 0, Errors: 38, Skipped: 0, Time elapsed: 0.23 
s <<< FAILURE! - in org.apache.spark.util.kvstore.LevelDBIteratorSuite
[ERROR] 
copyIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
 Time elapsed: 0.2 s <<< ERROR!
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no 
leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, 
no leveldbjni in java.library.path, 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 
/usr/local/src/spark/common/kvstore/target/tmp/libleveldbjni-64-1-610267671268036503.8:
 cannot open shared object file: No such file or directory (Possible cause: 
can't load AMD 64-bit .so on a AARCH64-bit platform)]
 at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] refIndexWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite) 
Time elapsed: 0 s <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.fusesource.leveldbjni.JniDBFactory
 at 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.createStore(LevelDBIteratorSuite.java:44)

[ERROR] 
numericIndexDescendingWithStart(org.apache.spark.util.kvstore.LevelDBIteratorSuite)
 Time elap


