[jira] [Updated] (SPARK-18803) Fix path-related and JarEntry-related test failures and skip some tests failed on Windows due to path length limitation

2016-12-10 Thread Sean Owen (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-18803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-18803:
--
Assignee: Hyukjin Kwon

> Fix path-related and JarEntry-related test failures and skip some tests failed on Windows due to path length limitation
> ---
>
> Key: SPARK-18803
> URL: https://issues.apache.org/jira/browse/SPARK-18803
> Project: Spark
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hyukjin Kwon
>Assignee: Hyukjin Kwon
>Priority: Minor
> Fix For: 2.2.0
>
>

[jira] [Updated] (SPARK-18803) Fix path-related and JarEntry-related test failures and skip some tests failed on Windows due to path length limitation

2016-12-09 Thread Hyukjin Kwon (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-18803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-18803:
-
Description: 
Some tests fail on Windows, as shown below, for several reasons.

*Incorrect path handling*

- {{FileSuite}}

{code}
[info] - binary file input as byte array *** FAILED *** (500 milliseconds)
[info]   "file:/C:/projects/spark/target/tmp/spark-e7c3a3b8-0a4b-4a7f-9ebe-7c4883e48624/record-bytestream-0.bin" did not contain "C:\projects\spark\target\tmp\spark-e7c3a3b8-0a4b-4a7f-9ebe-7c4883e48624\record-bytestream-0.bin" (FileSuite.scala:258)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info]   at org.apache.spark.FileSuite$$anonfun$14.apply$mcV$sp(FileSuite.scala:258)
[info]   at org.apache.spark.FileSuite$$anonfun$14.apply(FileSuite.scala:239)
[info]   at org.apache.spark.FileSuite$$anonfun$14.apply(FileSuite.scala:239)
...
{code}


{code}
[info] - Get input files via old Hadoop API *** FAILED *** (1 second, 94 milliseconds)
[info]   Set("/C:/projects/spark/target/tmp/spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200/output/part-0", "/C:/projects/spark/target/tmp/spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200/output/part-1") did not equal Set("C:\projects\spark\target\tmp\spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200\output/part-0", "C:\projects\spark\target\tmp\spark-cf5b1f8b-c5ed-43e0-8d17-546ebbfa8200\output/part-1") (FileSuite.scala:535)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info]   at org.apache.spark.FileSuite$$anonfun$29.apply$mcV$sp(FileSuite.scala:535)
[info]   at org.apache.spark.FileSuite$$anonfun$29.apply(FileSuite.scala:524)
[info]   at org.apache.spark.FileSuite$$anonfun$29.apply(FileSuite.scala:524)
...
{code}


{code}
[info] - Get input files via new Hadoop API *** FAILED *** (313 milliseconds)
[info]   Set("/C:/projects/spark/target/tmp/spark-12bc1540--4df6-9c4d-79e0e614407c/output/part-0", "/C:/projects/spark/target/tmp/spark-12bc1540--4df6-9c4d-79e0e614407c/output/part-1") did not equal Set("C:\projects\spark\target\tmp\spark-12bc1540--4df6-9c4d-79e0e614407c\output/part-0", "C:\projects\spark\target\tmp\spark-12bc1540--4df6-9c4d-79e0e614407c\output/part-1") (FileSuite.scala:549)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info]   at org.apache.spark.FileSuite$$anonfun$30.apply$mcV$sp(FileSuite.scala:549)
[info]   at org.apache.spark.FileSuite$$anonfun$30.apply(FileSuite.scala:538)
[info]   at org.apache.spark.FileSuite$$anonfun$30.apply(FileSuite.scala:538)
...
{code}
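The mismatches above come down to separator form: Hadoop reports {{file:}} URIs with forward slashes, while the test builds its expected strings from native Windows paths with backslashes, so a raw string comparison can never match. A minimal Java sketch of the mismatch (the paths here are made up, not the suite's actual temp directories), and one way to normalize before comparing:

```java
public class PathCompare {
    public static void main(String[] args) {
        // URI-style path as reported by Hadoop vs. the native Windows form
        String uriForm = "file:/C:/tmp/spark/record-bytestream-0.bin";
        String winPath = "C:\\tmp\\spark\\record-bytestream-0.bin";

        // A raw containment check fails because the separators differ
        System.out.println(uriForm.contains(winPath));                    // false

        // Normalizing backslashes to forward slashes makes it pass
        System.out.println(uriForm.contains(winPath.replace('\\', '/'))); // true
    }
}
```

One hedged reading of the fix direction: tests should compare paths in a canonical form (for example via {{java.net.URI}} or {{java.nio.file.Paths}}) rather than as raw strings.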


- {{TaskResultGetterSuite}}

{code}
[info] - handling results larger than max RPC message size *** FAILED *** (1 second, 579 milliseconds)
[info]   1 did not equal 0 Expect result to be removed from the block manager. (TaskResultGetterSuite.scala:129)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:500)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:466)
[info]   at org.apache.spark.scheduler.TaskResultGetterSuite$$anonfun$4.apply$mcV$sp(TaskResultGetterSuite.scala:129)
[info]   at org.apache.spark.scheduler.TaskResultGetterSuite$$anonfun$4.apply(TaskResultGetterSuite.scala:121)
[info]   at org.apache.spark.scheduler.TaskResultGetterSuite$$anonfun$4.apply(TaskResultGetterSuite.scala:121)
[info]   ...
[info]   Cause: java.net.URISyntaxException: Illegal character in path at index 12: string:///C:\projects\spark\target\tmp\spark-93c485af-68da-440f-a907-aac7acd5fc25\repro\MyException.java
[info]   at java.net.URI$Parser.fail(URI.java:2848)
[info]   at java.net.URI$Parser.checkChars(URI.java:3021)
[info]   at java.net.URI$Parser.parseHierarchical(URI.java:3105)
[info]   at java.net.URI$Parser.parse(URI.java:3053)
[info]   at java.net.URI.<init>(URI.java:588)
[info]   at java.net.URI.create(URI.java:850)
[info]   at org.apache.spark.TestUtils$.org$apache$spark$TestUtils$$createURI(TestUtils.scala:112
...
{code}
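The {{URISyntaxException}} above is not Windows-specific in itself: a backslash is simply not a legal character in a URI path, so concatenating a scheme onto a raw Windows path always fails to parse, on any OS. A small Java sketch (with a made-up path, not the suite's actual temp file) of the failure and the separator fix:

```java
import java.net.URI;

public class UriBackslash {
    public static void main(String[] args) {
        String winPath = "C:\\tmp\\repro\\MyException.java";

        // Backslashes make the URI parser fail; URI.create wraps the
        // URISyntaxException in an IllegalArgumentException
        try {
            URI.create("file:///" + winPath);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getCause().getMessage());
        }

        // Converting the separators first yields a valid URI
        URI ok = URI.create("file:///" + winPath.replace('\\', '/'));
        System.out.println(ok.getPath());
    }
}
```

This is a sketch of the failure mode only; the actual helper in the trace ({{TestUtils.createURI}}) may build its URI differently.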
