Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54930425
@andrewor14 @JoshRosen Bingo! Adding `-Dspark.ui.port=0` to
the Maven build makes `SparkSubmitSuite` pass for me where it failed before.
This was already set
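Setting the UI port to `0` delegates port selection to the operating system, which hands back any free ephemeral port. A minimal illustration using plain `java.net` (not Spark's actual helpers):

```scala
import java.net.ServerSocket

// Binding to port 0 lets the OS assign any free ephemeral port,
// which is what `-Dspark.ui.port=0` relies on.
val socket = new ServerSocket(0)
val assignedPort = socket.getLocalPort
try {
  println(s"OS assigned ephemeral port $assignedPort")
} finally {
  socket.close()
}
```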
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55001547
@srowen, that is cool, but it cannot explain the SBT test failure. We can use
this PR to run the tests a few times to diagnose the SBT problem.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/2108#discussion_r17313774
--- Diff: core/src/test/scala/org/apache/spark/DriverSuite.scala ---
@@ -18,9 +18,9 @@
package org.apache.spark
import java.io.File
+import
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55002361
@scwf The SBT build has already set `-Dspark.ui.port=0`, so I suspect this
is in fact the difference:
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/2108#discussion_r17314200
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -869,6 +871,7 @@ private[spark] object Utils extends Logging {
val exitCode =
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55003859
@srowen, yeah, we should set `spark.ui.port` in Maven, but the 'exited with
code 1' error also happened when testing with SBT, so there may be some other
cause behind this
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55004896
I wonder if this is a resource contention issue from having many parallel
copies of the tests running on the same Jenkins worker. For example, we might
be [exhausting
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55003995
@scwf Yes there is still some underlying issue here, where tests hold open
ports somehow for a long time. Randomizing the starting port _usually_ avoids
most collisions,
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55008276
@JoshRosen, exhausting the ephemeral ports seems unlikely; if that is the case
we should reduce our use of ephemeral ports. Or we can just verify that the
port is available before using it and try
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55016146
@scwf Actually we already try several times before getting a free one. My
interpretation of this is we simply ran out of ports, such that no matter how
many times we
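The retry-then-give-up behavior discussed here can be sketched roughly as follows. This is a simplified stand-in for Spark's actual port-binding logic; the names are illustrative:

```scala
import java.net.{BindException, ServerSocket}

// Try up to `maxRetries` successive ports, mimicking the
// "try several times before getting a free one" behavior.
// If every candidate port is taken, the last BindException escapes,
// matching the failure mode discussed in this thread.
def bindWithRetries(startPort: Int, maxRetries: Int = 16): ServerSocket = {
  var lastError: BindException = null
  for (attempt <- 0 to maxRetries) {
    // Port 0 means "any free port", so a retry of it is still port 0.
    val candidate = if (startPort == 0) 0 else startPort + attempt
    try {
      return new ServerSocket(candidate)
    } catch {
      case e: BindException => lastError = e
    }
  }
  throw lastError
}

val server = bindWithRetries(0)
println(s"bound to port ${server.getLocalPort}")
server.close()
```

If a worker's ephemeral port range really were exhausted, every candidate would fail and the final `BindException` would surface, no matter how many retries are allowed.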
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2108
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-55016211
For now, I will merge this because this helps us debug these test failures.
Thanks.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54876034
I SSH'ed into one of the Jenkins boxes and ran the Maven build using this,
which resulted in a very interesting error message when SparkSubmitSuite failed:
```
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54876278
Oh, and the test failure message helpfully included the actual spark-submit
command:
```java
- spark submit includes jars passed in through --jar ***
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54887895
Actually, let me make sure that it passes Jenkins first...
Jenkins, this is ok to test.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54897713
@srowen Actually we already use random ports in SBT tests (by setting
`spark.ui.port` to 0). This may be why the tests are failing in MVN much more
frequently than in
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54899453
Jenkins, retest this please.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54905473
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54908292
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2/consoleFull)
for PR 2108 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54909156
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20007/consoleFull)
for PR 2108 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54913894
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20007/consoleFull)
for PR 2108 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2108#discussion_r17281083
--- Diff: core/src/test/scala/org/apache/spark/DriverSuite.scala ---
@@ -18,9 +18,9 @@
package org.apache.spark
import java.io.File
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2108#discussion_r17281080
--- Diff: core/src/test/scala/org/apache/spark/DriverSuite.scala ---
@@ -18,9 +18,9 @@
package org.apache.spark
import java.io.File
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2108#discussion_r17281106
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -869,6 +871,7 @@ private[spark] object Utils extends Logging {
val exitCode
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54761870
These tests are reliably failing in [Jenkins Maven build for Spark Master
with
YARN](https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/).
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54694401
Can one of the admins verify this patch?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-54697506
I see. I would like to see the `SparkSubmitSuite` and `DriverSuite` tests
fail with this PR so we know what the messages look like before we merge it.
Unfortunately
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53914223
@scwf Looks like the code already redirects all `stderr` of the subprocess
to the console. Also, if the process does fail, what we want is not the
`stdout` but
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53946673
@andrewor14, `stderr` may not be enough; the reason I configure log4j is that
```scala
Logger.getRootLogger().setLevel(Level.WARN)
```
in `DriverSuite` does not take effect
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53829497
@andrewor14, tests passed~
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53642993
test this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53643810
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19350/consoleFull)
for PR 2108 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53655348
**Tests timed out** after a configured wait of `120m`.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53659736
retest this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53659966
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19366/consoleFull)
for PR 2108 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53663314
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19366/consoleFull)
for PR 2108 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53237332
Hey! Thanks for raising this concern.
The convention in Spark is that we look in
`[sub-project]/target/unit-tests.log`. And this is applicable to all
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53242393
Hi @ScrapCodes, I think `unit-tests.log` is very big and it's hard to find
the matching log for a test suite. And the key point is that Jenkins failed
intermittently due
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53243180
Ahh, I am still not sure about the changes. Maybe the PR has more changes
than just the fix you described?
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53243731
Yeah, here I fixed the log4j config of the forked process, because the old
version did not take effect. With the old version, the forked process's
InputStream output nothing
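One generic way to make a forked process's output visible, independent of its own log4j configuration, is to let it inherit the parent's streams. This is a plain-JVM sketch of the idea, not the code in this PR:

```scala
// If nothing reads a forked process's stdout/stderr, its output is
// simply lost, which makes failures like these hard to diagnose.
// Inheriting the parent's streams is one simple way to surface it.
val pb = new ProcessBuilder("echo", "hello from the forked process")
pb.inheritIO() // the child writes directly to this JVM's console
val exitCode = pb.start().waitFor()
println(s"child exited with code $exitCode")
```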
GitHub user scwf opened a pull request:
https://github.com/apache/spark/pull/2108
[SPARK-3193] Output error info when Process exit code is not zero in test
suite
https://issues.apache.org/jira/browse/SPARK-3193
I noticed that sometimes PR tests failed due to the Process exit code
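The PR's goal, surfacing a process's stderr when its exit code is non-zero, can be sketched with `scala.sys.process`. The helper name `runAndReport` is hypothetical, not Spark's actual utility:

```scala
import scala.collection.mutable.ArrayBuffer
import scala.sys.process.{Process, ProcessLogger}

// Run a command, buffering stdout and stderr separately; on a non-zero
// exit code, fail with a message that includes the captured stderr so
// the real error is visible instead of just "exited with code 1".
def runAndReport(cmd: Seq[String]): String = {
  val out = ArrayBuffer[String]()
  val err = ArrayBuffer[String]()
  val exitCode = Process(cmd).!(ProcessLogger(out += _, err += _))
  if (exitCode != 0) {
    sys.error(s"'${cmd.mkString(" ")}' exited with code $exitCode:\n" +
      err.mkString("\n"))
  }
  out.mkString("\n")
}
```

For example, a failing command such as `runAndReport(Seq("ls", "/no/such/dir"))` would raise an error carrying `ls`'s stderr, rather than reporting only a bare exit code.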
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53194216
Can one of the admins verify this patch?