Re: mvn test error

2014-08-19 Thread scwf

hi, Cheng Lian
  thanks, printing stdout/stderr of the forked process is more reasonable.

On 2014/8/19 13:35, Cheng Lian wrote:

The exception indicates that the forked process doesn't execute as expected, 
thus the test case /should/ fail.

Instead of replacing the exception with a |logWarning|, capturing and printing 
stdout/stderr of the forked process can be helpful for diagnosis. Currently the 
only information we have at hand is the process exit code; it's hard to 
determine why the forked process fails.



On Tue, Aug 19, 2014 at 1:27 PM, scwf wangf...@huawei.com wrote:

hi, all
   I notice that jenkins may also throw this error when running tests 
(https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18688/consoleFull).


This is because in Utils.executeAndGetOutput our process exitCode is not 
0; maybe we should logWarning here rather than throw an exception?

Utils.executeAndGetOutput {
  val exitCode = process.waitFor()
  stdoutThread.join()   // Wait for it to finish reading output
  if (exitCode != 0) {
    throw new SparkException("Process " + command + " exited with code " + exitCode)
  }
}

any idea?



On 2014/8/15 11:01, scwf wrote:

env: ubuntu 14.04 + spark master branch

mvn -Pyarn -Phive -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean 
package

mvn -Pyarn -Phadoop-2.4 -Phive test

test error:

DriverSuite:
Spark assembly has been built with Hive, including Datanucleus jars on 
classpath
- driver should exit after finishing *** FAILED ***
SparkException was thrown during property evaluation. 
(DriverSuite.scala:40)
  Message: Process List(./bin/spark-class, 
org.apache.spark.DriverWithoutCleanup, local) exited with code 1
  Occurred at table row 0 (zero based, not counting headings), 
which had values (
master = local
  )

SparkSubmitSuite:
Spark assembly has been built with Hive, including Datanucleus jars on 
classpath
- launch simple application with spark-submit *** FAILED ***
org.apache.spark.SparkException: Process List(./bin/spark-submit, 
--class, org.apache.spark.deploy.SimpleApplicationTest, --name, testApp, 
--master, local, file:/tmp/1408015655220-0/testJar-1408015655220.jar) exited 
with code 1

at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:810)
at org.apache.spark.deploy.SparkSubmitSuite.runSparkSubmit(SparkSubmitSuite.scala:311)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply$mcV$sp(SparkSubmitSuite.scala:291)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply(SparkSubmitSuite.scala:284)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply(SparkSubmitSuite.scala:284)
at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
...
Spark assembly has been built with Hive, including Datanucleus jars on 
classpath
- spark submit includes jars passed in through --jar *** FAILED ***
org.apache.spark.SparkException: Process List(./bin/spark-submit, 
--class, org.apache.spark.deploy.JarCreationTest, --name, testApp, --master, 
local-cluster[2,1,512], --jars, 
file:/tmp/1408015659416-0/testJar-1408015659471.jar,file:/tmp/1408015659472-0/testJar-1408015659513.jar, 
file:/tmp/1408015659415-0/testJar-1408015659416.jar) exited with code 1
at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:810)
at org.apache.spark.deploy.SparkSubmitSuite.runSparkSubmit(SparkSubmitSuite.scala:311)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply$mcV$sp(SparkSubmitSuite.scala:305)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply(SparkSubmitSuite.scala:294)
at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply(SparkSubmitSuite.scala:294)
at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
...


  

Re: mvn test error

2014-08-19 Thread Cheng Lian
Just FYI, thought this might be helpful, I'm refactoring Hive Thrift server
test suites. These suites also fork new processes and suffer similar
issues. Stdout and stderr of forked processes are logged in the new version
of test suites with utilities under scala.sys.process package
https://github.com/apache/spark/pull/1856/files
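For reference, a minimal sketch of that approach (illustrative only, not the actual code from the PR above): the scala.sys.process package provides ProcessLogger, which receives each stdout and stderr line of the forked process as it runs. The "echo" command here is just a stand-in for ./bin/spark-class or ./bin/spark-submit.

```scala
import scala.sys.process._

// Buffers for the forked process's output streams.
val out = new StringBuilder
val err = new StringBuilder

// ProcessLogger takes two callbacks: one for stdout lines, one for stderr lines.
val logger = ProcessLogger(
  line => out.append(line).append('\n'),
  line => err.append(line).append('\n'))

// Fork the process and wait for its exit code; "echo" is a stand-in command.
val exitCode = Seq("echo", "hello from forked process").!(logger)

println(s"exit code: $exitCode")
print(s"captured stdout: $out")
```

Printing these buffers when the exit code is nonzero makes the test failure diagnosable from the build log, instead of showing only "exited with code 1".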


On Tue, Aug 19, 2014 at 2:55 PM, scwf wangf...@huawei.com wrote:

 hi, Cheng Lian
   thanks, printing stdout/stderr of the forked process is more reasonable.

 On 2014/8/19 13:35, Cheng Lian wrote:

 The exception indicates that the forked process doesn't execute as
 expected, thus the test case /should/ fail.

 Instead of replacing the exception with a |logWarning|, capturing and
 printing stdout/stderr of the forked process can be helpful for diagnosis.
 Currently the only information we have at hand is the process exit code;
 it's hard to determine why the forked process fails.




 On Tue, Aug 19, 2014 at 1:27 PM, scwf wangf...@huawei.com wrote:

 hi, all
   I notice that jenkins may also throw this error when running tests
 (https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18688/consoleFull).



 This is because in Utils.executeAndGetOutput our process exitCode is
 not 0; maybe we should logWarning here rather than throw an exception?

 Utils.executeAndGetOutput {
   val exitCode = process.waitFor()
   stdoutThread.join()   // Wait for it to finish reading output
   if (exitCode != 0) {
     throw new SparkException("Process " + command + " exited with code " + exitCode)
   }
 }

 any idea?




Re: mvn test error

2014-08-18 Thread Cheng Lian
The exception indicates that the forked process doesn't execute as
expected, thus the test case *should* fail.

Instead of replacing the exception with a logWarning, capturing and
printing stdout/stderr of the forked process can be helpful for diagnosis.
Currently the only information we have at hand is the process exit code;
it's hard to determine why the forked process fails.
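As a concrete illustration of that suggestion, here is a hedged sketch (the name runAndGetOutput and its signature are hypothetical, not Spark's actual Utils API): it still fails on a nonzero exit code, but carries the captured stdout/stderr in the exception message.

```scala
import scala.sys.process._

// Hypothetical stand-in for Utils.executeAndGetOutput: run a command,
// capture both output streams, and report them if the process fails.
def runAndGetOutput(command: Seq[String]): String = {
  val stdoutBuf = new StringBuilder
  val stderrBuf = new StringBuilder
  val logger = ProcessLogger(
    line => stdoutBuf.append(line).append('\n'),
    line => stderrBuf.append(line).append('\n'))
  val exitCode = command.!(logger)
  if (exitCode != 0) {
    // Instead of only "exited with code N", include both streams so the
    // failure of a forked test process is diagnosable from the log.
    throw new RuntimeException(
      s"Process $command exited with code $exitCode\n" +
        s"stdout:\n$stdoutBuf\nstderr:\n$stderrBuf")
  }
  stdoutBuf.toString
}

println(runAndGetOutput(Seq("echo", "ok")))
```

The exception is kept (so the test still fails as it should), and the diagnostic information rides along with it.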


On Tue, Aug 19, 2014 at 1:27 PM, scwf wangf...@huawei.com wrote:

 hi, all
   I notice that jenkins may also throw this error when running tests
  (https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18688/consoleFull).


 This is because in Utils.executeAndGetOutput our process exitCode is not
 0; maybe we should logWarning here rather than throw an exception?

 Utils.executeAndGetOutput {
   val exitCode = process.waitFor()
   stdoutThread.join()   // Wait for it to finish reading output
   if (exitCode != 0) {
     throw new SparkException("Process " + command + " exited with code " + exitCode)
   }
 }

 any idea?



 On 2014/8/15 11:01, scwf wrote:

 env: ubuntu 14.04 + spark master branch

 mvn -Pyarn -Phive -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean
 package

 mvn -Pyarn -Phadoop-2.4 -Phive test

 test error:

 DriverSuite:
 Spark assembly has been built with Hive, including Datanucleus jars on
 classpath
 - driver should exit after finishing *** FAILED ***
SparkException was thrown during property evaluation.
 (DriverSuite.scala:40)
  Message: Process List(./bin/spark-class, 
 org.apache.spark.DriverWithoutCleanup,
 local) exited with code 1
  Occurred at table row 0 (zero based, not counting headings), which
 had values (
master = local
  )

 SparkSubmitSuite:
 Spark assembly has been built with Hive, including Datanucleus jars on
 classpath
 - launch simple application with spark-submit *** FAILED ***
   org.apache.spark.SparkException: Process List(./bin/spark-submit,
 --class, org.apache.spark.deploy.SimpleApplicationTest, --name, testApp,
 --master, local, file:/tmp/1408015655220-0/testJar-1408015655220.jar)
 exited with code 1

   at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:810)
   at org.apache.spark.deploy.SparkSubmitSuite.runSparkSubmit(SparkSubmitSuite.scala:311)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply$mcV$sp(SparkSubmitSuite.scala:291)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply(SparkSubmitSuite.scala:284)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$14.apply(SparkSubmitSuite.scala:284)
   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
   at org.scalatest.Transformer.apply(Transformer.scala:22)
 ...
 Spark assembly has been built with Hive, including Datanucleus jars on
 classpath
 - spark submit includes jars passed in through --jar *** FAILED ***
   org.apache.spark.SparkException: Process List(./bin/spark-submit,
 --class, org.apache.spark.deploy.JarCreationTest, --name, testApp,
 --master, local-cluster[2,1,512], --jars,
 file:/tmp/1408015659416-0/testJar-1408015659471.jar,file:/tmp/1408015659472-0/testJar-1408015659513.jar,
 file:/tmp/1408015659415-0/testJar-1408015659416.jar) exited with code 1
   at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:810)
   at org.apache.spark.deploy.SparkSubmitSuite.runSparkSubmit(SparkSubmitSuite.scala:311)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply$mcV$sp(SparkSubmitSuite.scala:305)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply(SparkSubmitSuite.scala:294)
   at org.apache.spark.deploy.SparkSubmitSuite$$anonfun$15.apply(SparkSubmitSuite.scala:294)
   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
   at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
   at org.scalatest.Transformer.apply(Transformer.scala:22)
 ...


 but testing only the specific suite as follows will be ok:
 mvn -Pyarn -Phadoop-2.4 -Phive -DwildcardSuites=org.apache.spark.DriverSuite
 test

 it seems that when run with mvn -Pyarn -Phadoop-2.4 -Phive test, the process
 started by Utils.executeAndGetOutput cannot exit successfully
 (exit code is not zero)

 anyone has an idea for this?






 --

 Best Regards
 Fei Wang

 
 



 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org