[ https://issues.apache.org/jira/browse/TINKERPOP-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15141330#comment-15141330 ]

ASF GitHub Bot commented on TINKERPOP-1041:
-------------------------------------------

Github user velo commented on the pull request:

    
    https://github.com/apache/incubator-tinkerpop/pull/209#issuecomment-182506783
    Yes, I was not able to fix these errors.
    
    But if you run either master or tp311, more tests will fail.
    
    On Thu, 11 Feb 2016 04:01 Jason Plurad <notificati...@github.com> wrote:
    
    > I'm running into some test failures when I do mvn clean install on
    > Windows. I built on Linux and Mac without running into this.
    >
    > Tests run: 9, Failures: 2, Errors: 2, Skipped: 0, Time elapsed: 37.053 sec <<< FAILURE! - in org.apache.tinkerpop.gremlin.spark.SparkHadoopGremlinTest
    > shouldSupportRemoveAndListMethods(org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck)  Time elapsed: 1.498 sec  <<< FAILURE!
    > java.lang.AssertionError: expected:<2> but was:<3>
    >         at org.junit.Assert.fail(Assert.java:88)
    >         at org.junit.Assert.failNotEquals(Assert.java:834)
    >         at org.junit.Assert.assertEquals(Assert.java:645)
    >         at org.junit.Assert.assertEquals(Assert.java:631)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.AbstractStorageCheck.checkRemoveAndListMethods(AbstractStorageCheck.java:84)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck.shouldSupportRemoveAndListMethods(FileSystemStorageCheck.java:58)
    >
    > shouldNotHaveResidualDataInStorage(org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck)  Time elapsed: 1.372 sec  <<< FAILURE!
    > java.lang.AssertionError: null
    >         at org.junit.Assert.fail(Assert.java:86)
    >         at org.junit.Assert.assertTrue(Assert.java:41)
    >         at org.junit.Assert.assertFalse(Assert.java:64)
    >         at org.junit.Assert.assertFalse(Assert.java:74)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.AbstractStorageCheck.checkResidualDataInStorage(AbstractStorageCheck.java:133)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck.shouldNotHaveResidualDataInStorage(FileSystemStorageCheck.java:78)
    >
    > shouldSupportCopyMethods(org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck)  Time elapsed: 0.702 sec  <<< ERROR!
    > java.util.concurrent.ExecutionException: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory C:/home/pluradj/src/github/apache/incubator-tinkerpop/TINKERPOP-1041/spark-gremlin/target/test-case-data/SparkHadoopGraphProvider/graph-provider-data/clusterCount already exists
    >         at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
    >         at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.AbstractStorageCheck.checkCopyMethods(AbstractStorageCheck.java:109)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck.shouldSupportCopyMethods(FileSystemStorageCheck.java:69)
    > Caused by: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory C:/home/pluradj/src/github/apache/incubator-tinkerpop/TINKERPOP-1041/spark-gremlin/target/test-case-data/SparkHadoopGraphProvider/graph-provider-data/clusterCount already exists
    >         at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1011)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    >         at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    >         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    >         at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    >         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
    >         at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopFile(JavaPairRDD.scala:809)
    >         at org.apache.tinkerpop.gremlin.spark.structure.io.OutputFormatRDD.writeMemoryRDD(OutputFormatRDD.java:65)
    >         at org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer.lambda$submitWithExecutor$1(SparkGraphComputer.java:271)
    >         at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    >         at java.lang.Thread.run(Thread.java:745)
    >
    > shouldSupportHeadMethods(org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck)  Time elapsed: 0.032 sec  <<< ERROR!
    > java.io.IOException: Unable to delete file: C:\home\pluradj\src\github\apache\incubator-tinkerpop\TINKERPOP-1041\spark-gremlin\target\test-case-data\SparkHadoopGraphProvider\graph-provider-data\~reducing\part-r-00001
    >         at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2279)
    >         at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
    >         at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
    >         at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
    >         at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
    >         at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck.deleteDirectory(FileSystemStorageCheck.java:101)
    >         at org.apache.tinkerpop.gremlin.hadoop.structure.io.FileSystemStorageCheck.shouldSupportHeadMethods(FileSystemStorageCheck.java:49)
    >
    > Running org.apache.tinkerpop.gremlin.spark.structure.io.InputOutputRDDTest
    > [WARN] org.apache.tinkerpop.gremlin.hadoop.process.computer.AbstractHadoopGraphComputer$Features - Unknown OutputFormat class and thus, persistence options are unknown -- assuming all options are possible
    > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.346 sec - in org.apache.tinkerpop.gremlin.spark.structure.io.InputOutputRDDTest
    > Running org.apache.tinkerpop.gremlin.spark.structure.io.InputRDDTest
    > Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.933 sec <<< FAILURE! - in org.apache.tinkerpop.gremlin.spark.structure.io.InputRDDTest
    > shouldReadFromArbitraryRDD(org.apache.tinkerpop.gremlin.spark.structure.io.InputRDDTest)  Time elapsed: 3.224 sec  <<< ERROR!
    > java.lang.IllegalStateException: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory C:/home/pluradj/src/github/apache/incubator-tinkerpop/TINKERPOP-1041/spark-gremlin/target/test-case-data/InputRDDTest/shouldReadFromArbitraryRDD/~reducing already exists
    >         at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
    >         at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
    >         at org.apache.tinkerpop.gremlin.process.computer.traversal.step.map.ComputerResultStep.processNextStart(ComputerResultStep.java:80)
    >         at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:126)
    >         at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:37)
    >         at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:157)
    >         at org.apache.tinkerpop.gremlin.spark.structure.io.InputRDDTest.shouldReadFromArbitraryRDD(InputRDDTest.java:56)
    >
    > Caused by: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory C:/home/pluradj/src/github/apache/incubator-tinkerpop/TINKERPOP-1041/spark-gremlin/target/test-case-data/InputRDDTest/shouldReadFromArbitraryRDD/~reducing already exists
    >         at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1011)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    >         at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    >         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    >         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    >         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    >         at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    >         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
    >         at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopFile(JavaPairRDD.scala:809)
    >         at org.apache.tinkerpop.gremlin.spark.structure.io.OutputFormatRDD.writeMemoryRDD(OutputFormatRDD.java:65)
    >         at org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer.lambda$submitWithExecutor$1(SparkGraphComputer.java:271)
    >         at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    >         at java.lang.Thread.run(Thread.java:745)
    >
    > Travis seems happy, so it's something in my environment?
    >
    > Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T07:57:37-04:00)
    > Maven home: C:\home\pluradj\usr\lib\apache-maven-3.3.3\bin\..
    > Java version: 1.8.0_66, vendor: Oracle Corporation
    > Java home: C:\home\pluradj\usr\lib\jdk1.8.0_66\jre
    > Default locale: en_US, platform encoding: Cp1252
    > OS name: "windows 7", version: "6.1", arch: "amd64", family: "dos"
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/incubator-tinkerpop/pull/209#issuecomment-182410088>
    > .
    >
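
Both FileAlreadyExistsException traces above point at the same cascade: an earlier test's output under target/test-case-data survives because the Windows cleanup fails (the shouldSupportHeadMethods trace shows FileUtils.deleteDirectory unable to delete a part file), so the next Spark job finds the directory already in place and refuses to write. A minimal sketch of the kind of pre-submit guard that breaks the cascade, written against the Hadoop FileSystem API; the class and helper names here are illustrative, not the project's actual cleanup code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical cleanup guard (illustrative sketch only, not TinkerPop code).
// Removing a stale output location before submitting the next job avoids
// "FileAlreadyExistsException: Output directory ... already exists".
public final class OutputDirectoryGuard {

    private OutputDirectoryGuard() {}

    public static void deleteIfPresent(final String outputLocation) throws Exception {
        final Configuration conf = new Configuration();
        final FileSystem fs = FileSystem.get(conf);  // local file system unless fs.defaultFS says otherwise
        final Path output = new Path(outputLocation);
        if (fs.exists(output)) {
            fs.delete(output, true);                 // recursive delete of the whole output directory
        }
    }
}
{code}

Whether the test providers actually clean up this way is not shown in the thread; the sketch only illustrates the failure mode the traces describe.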



> StructureStandardTestSuite has file I/O issues on Windows
> ---------------------------------------------------------
>
>                 Key: TINKERPOP-1041
>                 URL: https://issues.apache.org/jira/browse/TINKERPOP-1041
>             Project: TinkerPop
>          Issue Type: Bug
>          Components: test-suite
>    Affects Versions: 3.0.2-incubating
>         Environment: Windows 10, Java 8, TinkerPop version "3.0.2-incubating"
>            Reporter: Martin Häusler
>            Assignee: Jason Plurad
>             Fix For: 3.1.2-incubating
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Most of the tests in StructureStandardTestSuite/IoGraphTest cause an 
> unexpected java.io.IOException. The stack trace looks like this:
> {panel:title=Stack Trace}
> java.io.IOException: The the file name, directory name or volume label syntax is incorrect.
>       at java.io.WinNTFileSystem.createFileExclusively(Native Method)
>       at java.io.File.createTempFile(Unknown Source)
>       at org.apache.tinkerpop.gremlin.TestHelper.generateTempFile(TestHelper.java:74)
>       at org.apache.tinkerpop.gremlin.structure.io.IoGraphTest.shouldReadWriteModernToFileWithHelpers(IoGraphTest.java:164)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>       at java.lang.reflect.Method.invoke(Unknown Source)
> {panel}
> I'm running the test suite from Eclipse under Java 8, on a Windows 10 x64 
> machine. The dependencies in my project are managed with Gradle. 
> Investigating the offending line 
> (org.apache.tinkerpop.gremlin.TestHelper.java@74) in the debugger reveals the 
> following parameters of "File.createTempFile(...)":
> {noformat}
> fileName = "shouldReadWriteModernToFileWithHelpers[graphml]"
> fileNameSuffix = ".xml"
> path = 
> "file:\D:\guh\caches\modules-2\files-2.1\org.apache.tinkerpop\gremlin-test\3.0.2-incubating\345ec87b74923b76374111f2e4040d4d105f256\temp"
> {noformat}
> The offending part is the "path" variable, because it contains the prefix 
> "file:\". I tried the same thing in a dedicated JUnit test without the prefix 
> and it works fine.
> I would be very happy to see this issue fixed, as it considerably reduces 
> the number of tests in the suite that I can run against my graph 
> implementation.
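
The parameter dump above shows the root cause: the directory handed to File.createTempFile still carries the "file:\" URL scheme, which WinNTFileSystem rejects as an invalid path. A minimal sketch of the usual remedy, resolving the location through a java.net.URI so the scheme never reaches the path string; the class and helper below are illustrative, not TestHelper's actual implementation:

{code:java}
import java.io.File;
import java.net.URL;

// Illustrative sketch only, not TestHelper's real code.
public final class TempDirExample {

    // Resolve a temp directory next to the test classes without leaving the
    // "file:" scheme in the path: new File(URI) strips the scheme and
    // normalizes separators, yielding a plain path such as D:\...\temp
    // instead of file:\D:\...\temp.
    static File tempDirectory(final Class<?> clazz) throws Exception {
        final URL location = clazz.getProtectionDomain().getCodeSource().getLocation();
        final File base = new File(location.toURI());
        final File temp = new File(base.isDirectory() ? base : base.getParentFile(), "temp");
        temp.mkdirs();
        return temp;
    }

    public static void main(final String[] args) throws Exception {
        final File dir = tempDirectory(TempDirExample.class);
        // The same call that fails in IoGraphTest should succeed once the
        // directory is a real filesystem path rather than a "file:\..." string.
        final File f = File.createTempFile("shouldReadWriteModernToFileWithHelpers", ".xml", dir);
        System.out.println("created " + f.getAbsolutePath());
    }
}
{code}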


