Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-57430441
Thank you @andrewor14.
I've researched this problem over the past few days in our environment, and it
turned out to be a very rare case, as @vanzin first suggested.
(like
Github user tsudukim closed the pull request at:
https://github.com/apache/spark/pull/1558
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2613
[SPARK-3757] mvn clean doesn't delete some files
Added the directory to be deleted to the maven-clean-plugin configuration in pom.xml.
You can merge this pull request into a Git repository by running:
$ git pull
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/2612#issuecomment-57606210
Generally, using LF as the EOL character of *.cmd files may cause trouble.
For example, when a *.cmd file includes LF and multibyte characters, some
characters
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2639
[SPARK-3774] typo comment in bin/utils.sh
Modified the comment of bin/utils.sh.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tsudukim/spark
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2640
[SPARK-3775] Not suitable error message in spark-shell.cmd
Modified some sentences of the error messages in bin\*.cmd.
You can merge this pull request into a Git repository by running:
$ git pull
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/2640#issuecomment-57772760
Yes, so I removed the specific recommendation for SBT.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2669
[SPARK-3808] PySpark fails to start in Windows
Fixed a syntax error in the *.cmd script.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tsudukim
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/1516
[SPARK-2567] Resubmitted stage sometimes remains as active stage in the web
UI
Moved the line that posts SparkListenerStageSubmitted to after the check
of task size and serializability.
You
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1516#issuecomment-49909562
You can see the screenshot that the original code generated in the JIRA.
https://issues.apache.org/jira/browse/SPARK-2567
This screenshot was taken after the job
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1516#issuecomment-49919528
Hmm... I didn't notice it.
I'm going to rerun the test for confirmation, per @xiajunluan's commit
comment.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1516#issuecomment-49940152
The test fully succeeded again.
If @xiajunluan's commit only aimed to avoid the unit test error, I
think it should be reverted as in this PR. But I'm
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1516#issuecomment-49940743
Hi @rxin, thank you for following this ticket, but couldn't we separate
those problems into different PRs? SPARK-2298 is not about this problem.
I think we
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49944747
@rxin Surely we can also fix them all in one patch. But it could be a bit
hard to modify them all compatibly in one patch, so I thought it better to
separate
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49944917
@lianhuiwang It appears to be a different problem from SPARK-2298.
Is your aim the same as this ticket?
https://issues.apache.org/jira/browse/SPARK-1362
If so, how
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/1558
[SPARK-2458] Make failed application log visible on History Server
Modified to show uncompleted applications in the History Server UI.
Modified the app sort rule to be startTime-based (originally
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-49956854
We get the same UI as now by default.
![spark-2458-notinclude](https://cloud.githubusercontent.com/assets/8070366/3682544/dca4bb96-12cf-11e4-9965-0efa231babd9.png)
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-50060159
Thank you for following this PR.
Let me explain a little.
I'm sorry I made you misunderstand my purpose with the improper word
"uncompleted". The purpose
Github user tsudukim closed the pull request at:
https://github.com/apache/spark/pull/1516
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1516#issuecomment-50313978
SPARK-2567 is resolved by #1566.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2796
[SPARK-3946] gitignore in /python includes wrong directory
Modified to ignore not the docs/ directory but only docs/_build/, which
is the output directory of the Sphinx build.
You can merge
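The ignore-rule change described above can be sketched as a minimal .gitignore fragment (the comment lines are editorial; the exact surrounding entries of python/.gitignore are not shown in this thread):

```
# Before (too broad): this hid the whole docs/ tree, including tracked files
# docs/

# After: ignore only the Sphinx build output
docs/_build/
```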
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2797
[SPARK-3943] Some scripts bin\*.cmd pollutes environment variables in
Windows
Modified not to pollute environment variables.
Just moved the main logic into `XXX2.cmd` from `XXX.cmd`, and call
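The isolation idea in this PR description (a thin `XXX.cmd` wrapper that invokes the real logic in `XXX2.cmd`, so variables set by the logic never leak into the caller's session) can be sketched with a Python analogue; the variable name below is made up for illustration:

```python
import os
import subprocess
import sys

# Analogue of the wrapper/inner-script split (not the actual .cmd code):
# the inner logic sets an environment variable, but since it runs in a
# child process, the parent's environment stays clean afterwards.
inner = 'import os; os.environ["SPARK_DEMO_VAR"] = "polluted"'

before = os.environ.get("SPARK_DEMO_VAR")
subprocess.run([sys.executable, "-c", inner], check=True)
after = os.environ.get("SPARK_DEMO_VAR")

print(before is None and after is None)  # the child's change did not leak
```

The same property holds for `cmd.exe`: a variable `set` inside a child `cmd /c` invocation does not survive into the parent shell.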
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/2797#issuecomment-59027470
Please merge this *AFTER* #2796 is merged, because /python/docs/make2.bat
will be ignored by .gitignore in /python by mistake.
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/2797#discussion_r18870526
--- Diff: bin/spark-shell2.cmd ---
@@ -0,0 +1,22 @@
+@echo off
+
+rem
+rem Licensed to the Apache Software Foundation (ASF) under one or more
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/2797#issuecomment-59146580
@andrewor14 thank you for following this PR.
Yes, that's what I mean.
I'm not observing any problems; this is just a safeguard. Polluting the
environment might affect not
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/1350
SPARK-2115: Stage kill link is too close to stage details link
Moved the (kill) link to the right side. Added a confirmation dialog shown
when the (kill) link is clicked.
You can merge this pull request into a Git
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1350#issuecomment-48555944
The modified image is as follows:
![spark-2115-img1](https://cloud.githubusercontent.com/assets/8070366/3533455/f9fb6fcc-07d1-11e4-9e97-653000957e11.png)
![spark
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1350#issuecomment-48558169
I nearly misclicked many times when I tried to go to the stage detail page.
And of course, you can kill the stage by clicking the OK button in the
confirmation dialog
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/1384
SPARK-2298: Show stage attempt in UI
Added an attempt ID column to the stage page of the web UI.
Added attemptId handling code to StageInfo and JsonProtocol.
Modified DAGScheduler to identify stages
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-48800698
The attempt ID shows up in the web UI. Submitted and Duration became
individual values for each stage attempt.
![spark-2298](https://cloud.githubusercontent.com/assets/8070366
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-48802747
@pwendell Thank you for your response. You mean like this?
![spark-2298-2](https://cloud.githubusercontent.com/assets/8070366/3560678/177e6e52-0983-11e4-995e
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-48803681
@andrewor14 Thank you for your comment.
I think it is weirder if the display style of ID/attempt changes
depending on conditions.
Surely most stages will only have 1
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-48804094
@rxin OK, thanks. Then the attempt ID is still required in the web UI for users
to identify stage + attempt. Have I got that right?
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-48963364
I'm wondering how to show it. I gave it a shot. Is it smart?
![spark-2298-3](https://cloud.githubusercontent.com/assets/8070366/3577653/01e186ec-0b9f-11e4-930c
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1094#issuecomment-48990919
Is someone working on this, or facing some problem?
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1094#issuecomment-49069859
OK, thank you for your reply.
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/1384#discussion_r14976665
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -478,6 +479,7 @@ private[spark] object JsonProtocol {
def
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49207731
@pwendell I agree that there is much room for improvement in the handling
of stageId and attemptId. It might be better to break this problem into some
sub-tasks. I
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/1384#discussion_r15019018
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -478,6 +479,7 @@ private[spark] object JsonProtocol {
def
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49209319
@rxin OK. After that, I think I can make this patch better.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49209857
@rxin in #1262, can I expect the key of the stage data in
JobProgressListener to become stageId + attemptId instead of stageId only?
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1341#issuecomment-49250255
Looks good, but this patch seems to include some diffs unrelated to
SPARK-2481.
* conf/spark-env.sh.template
* docs/spark-standalone.md
* sbin/spark
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1384#issuecomment-49495727
Modified the PR per your comments. Thank you!
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/1918
[SPARK-3006] Failed to execute spark-shell in Windows OS
Modified the order of the options and arguments in spark-shell.cmd
You can merge this pull request into a Git repository by running
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3279
[SPARK-4421] Wrong link in spark-standalone.html
Modified the link of building Spark.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tsudukim
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3280
[SPARK-4421] Wrong link in spark-standalone.html
Modified the link of building Spark. (backport version of #3279.)
You can merge this pull request into a Git repository by running:
$ git pull
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3280#issuecomment-63162188
I sent 2 PRs about
[SPARK-4421](https://issues.apache.org/jira/browse/SPARK-4421) because the page
names are different between Spark 1.2 and Spark 1.1
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3279#issuecomment-63162179
I sent 2 PRs about
[SPARK-4421](https://issues.apache.org/jira/browse/SPARK-4421) because the page
names are different between Spark 1.2 and Spark 1.1
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3280#issuecomment-63205228
Thank you @srowen for following this ticket.
I know a PR should generally be for master, and I already sent one for master
(#3279).
But the details of the modification
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3329
[SPARK-4464] Description about configuration options need to be modified in
docs.
Added descriptions about -h and -host.
Modified descriptions about -i and -ip, which are now deprecated
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3350
[SPARK-3060] spark-shell.cmd doesn't accept application options in Windows
OS
Added a module equivalent to utils.sh and modified spark-shell2.cmd to use it
to parse options.
Now we can use
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3467
[SPARK-2458] Make failed application log visible on History Server
Enabled HistoryServer to show incomplete applications.
We can see the log for incomplete applications by clicking the bottom
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-64509912
In this PR, the sort order is (-endTime, -startTime), which means that
the sorting is still by end time for completed apps but by start time for
incomplete apps
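The composite key described here can be sketched in Python (illustrative only; the sentinel endTime = -1 for incomplete apps is an assumption, not Spark's actual representation):

```python
# Sort by (-endTime, -startTime): completed apps order by end time (newest
# first); incomplete apps, which share the same sentinel endTime, fall back
# to start time (newest first) and sink to the end of the list.
apps = [
    {"name": "c1", "startTime": 10, "endTime": 50},   # completed
    {"name": "c2", "startTime": 5,  "endTime": 70},   # completed
    {"name": "i1", "startTime": 20, "endTime": -1},   # incomplete
    {"name": "i2", "startTime": 30, "endTime": -1},   # incomplete
]

ordered = sorted(apps, key=lambda a: (-a["endTime"], -a["startTime"]))
print([a["name"] for a in ordered])  # ['c2', 'c1', 'i2', 'i1']
```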
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-64510079
And these are the screenshots of the new UI.
A new link to the page of incomplete applications is added at the bottom.
(Show incomplete applications)
![spark-2458
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-64510185
@andrewor14 I created a new PR (#3467) as your comment. Please check it.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3489
[SPARK-4634] Enable metrics for each application to be gathered in one node
Added a configuration option for adding a top-level name to the metrics name.
You can merge this pull request into a Git repository
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3489#issuecomment-64737931
Please see https://issues.apache.org/jira/browse/SPARK-4634 for detail of
this problem.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3489#issuecomment-64816105
Sorry, GraphiteSink already has the `prefix` option and it works fine.
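For context, Spark sinks are configured through metrics.properties; a hedged sketch of a Graphite sink configuration using the `prefix` option mentioned above (the host, port, and prefix values here are made up):

```
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.prefix=my-app
```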
Github user tsudukim closed the pull request at:
https://github.com/apache/spark/pull/3489
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3500
[SPARK-4642] Documents about running-on-YARN needs update
Added descriptions about these parameters.
- spark.yarn.report.interval
- spark.yarn.queue
- spark.yarn.user.classpath.first
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3456#issuecomment-64919851
I don't think it's a good idea to lose the ability to sort tasks globally
by anything other than launch time.
About OOM, I wrote something to the JIRA ticket.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-64922238
@ryan-williams Thank you for your review!
I fixed them.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3500#issuecomment-65169336
@sryza and @tgravescs Thank you for your review. I removed them. Only
`spark.yarn.queue` is added.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3560
[SPARK-4701] Typo in sbt/sbt
Fixed a typo.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tsudukim/spark feature/SPARK-4701
Alternatively
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3591#issuecomment-65638788
@pwendell Thank you for your comment. I quite agree that Windows scripts
like .cmd or .bat are very costly to maintain, but this time I used
PowerShell, which
Github user tsudukim closed the pull request at:
https://github.com/apache/spark/pull/3280
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3280#issuecomment-65834528
Thank you! @JoshRosen
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3591#issuecomment-65837669
I wonder which is better, but I tend to think we should not submit this upstream.
It would be a good idea if this were made from the latest sbt script, but
unfortunately this is made
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3350#issuecomment-66720899
Hi @andrewor14, yes, I've tested it in my environment. Would you check it?
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-66810500
@andrewor14 Thank you for following this ticket.
I finished rebasing to master.
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68043299
Thank you for your comments! I'm going to do it in a few days!
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/3467#discussion_r22270614
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -180,14 +176,15 @@ private[history] class FsHistoryProvider
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/4428
[SPARK-5396] Syntax error in spark scripts on windows.
Fixed a syntax error in spark-submit2.cmd. The command prompt doesn't have
a `defined` operator.
You can merge this pull request into a Git
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3943#issuecomment-70769057
@sarutak @tgravescs @vanzin Thank you for your comments!
Though I don't have enough time these days, I'm going to do it
by next week. Sorry
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3943#issuecomment-72012756
@vanzin Actually, the tests for org.apache.spark.deploy.yarn.* fail on Windows
even on the master branch.
I just ignored that error and checked for new errors
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68837548
resolved conflicts!
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3943#issuecomment-69281466
thanks @andrewor14 for following this.
I have tested it on two YARN clusters: 2.3 and 2.5. Both have 1 master and 3
slaves.
From a Linux client, it works fine
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/3943
[SPARK-1825] Make Windows Spark client work fine with Linux YARN cluster
Modified environment strings and path separators to a platform-independent
style where possible.
You can merge this pull
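The platform-independent style in the description above can be sketched in Python (illustrative; the `<CPS>` placeholder mirrors the deferred-separator trick used for cross-platform YARN classpaths, and the helper names are made up):

```python
# Instead of baking the client's path separator (";" on Windows, ":" on
# Linux) into strings sent to the cluster, keep a placeholder and let the
# target platform substitute its own separator at launch time.
CPS = "<CPS>"  # hypothetical placeholder token

def build_classpath(entries):
    return CPS.join(entries)

def expand_for(platform, classpath):
    sep = ";" if platform == "windows" else ":"
    return classpath.replace(CPS, sep)

cp = build_classpath(["spark.jar", "app.jar"])
print(expand_for("linux", cp))    # spark.jar:app.jar
print(expand_for("windows", cp))  # spark.jar;app.jar
```

This is what lets a Windows client submit to a Linux YARN cluster: neither side's separator is hard-coded on the other.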
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-87900130
@vanzin I'm not sure I've understood your suggestion, but as I wrote in
the JIRA, I think this is not a problem on the Java side.
https://issues.apache.org/jira/browse
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-87903556
"Java application" means the launcher
`launcher\src\main\java\org\apache\spark\launcher\Main.java`.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/5227
[SPARK-6435] spark-shell --jars option does not add all jars to classpath
Modified to accept double-quoted args properly in spark-shell.cmd.
You can merge this pull request into a Git repository
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-88820468
oops, forgot to include fixed test code.
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/5328
[SPARK-6673] spark-shell.cmd can't start in Windows even when spark was
built
Added a script equivalent to load-spark-env.sh.
You can merge this pull request into a Git repository by running
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-88761331
Ah @vanzin, I didn't understand your suggestion. `CommandBuilderUtils`
needs to be modified to escape commas.
But I think we still need to modify `spark-class2.cmd` as well
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5328#issuecomment-88838658
This problem was introduced by
https://github.com/apache/spark/commit/e3eb393961051a48ed1cac756ac1928156aa161f
https://issues.apache.org/jira/browse/SPARK-6406
So
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5227#discussion_r27644574
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -260,15 +260,14 @@ static String quoteForBatchScript(String arg
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5227#discussion_r27645187
--- Diff: launcher/src/main/java/org/apache/spark/launcher/Main.java ---
@@ -101,12 +101,9 @@ public static void main(String[] argsArray) throws
Exception
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5227#discussion_r27645518
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -260,15 +260,14 @@ static String quoteForBatchScript(String arg
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5347#issuecomment-89237471
This PR requires #5227 merged.
(https://issues.apache.org/jira/browse/SPARK-6435)
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/5347
[SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has
space in its path
Escaped spaces in the arguments.
You can merge this pull request into a Git repository by running
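A minimal sketch of the kind of quoting this PR title implies (this is not Spark's actual `quoteForBatchScript`; the character set and escaping rule below are assumptions for illustration):

```python
# Wrap an argument in double quotes when it contains characters that
# cmd.exe would otherwise split on or mangle; escape embedded quotes.
def quote_for_batch(arg):
    if any(c in arg for c in ' ,;=\t"'):
        return '"%s"' % arg.replace('"', '\\"')
    return arg

print(quote_for_batch(r"C:\path with space\my.jar"))  # "C:\path with space\my.jar"
```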
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5447#discussion_r28302293
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1659,9 +1659,14 @@ private[spark] object Utils extends Logging {
val
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-95798208
I was checking the `SparkLauncherSuite` on Windows per vanzin's
comment, and faced some trouble. It seems not to be related to this PR, but I'm
not sure yet. Please
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5447#discussion_r29036669
--- Diff: core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala ---
@@ -82,7 +82,7 @@ object PythonRunner {
spark-submit is currently
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-95045723
I tested only on Windows, but I noticed I get different results on Linux.
This is because...
On Windows:
```
scala> new File("C:\\path\\to\\file.txt").toURI
```
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-95491028
The problem is that the result is different on Windows and Linux even if
the input path strings are exactly the same. We can't use the same test code.
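The platform difference described in the comments above can be illustrated with Python's pathlib pure-path flavors (as a stand-in for Java's `File.toURI`), since they model each platform's path rules without touching the OS:

```python
from pathlib import PurePosixPath, PureWindowsPath

s = "C:\\path\\to\\file.txt"

# Under Windows rules, the string is an absolute path and maps to a file: URI.
print(PureWindowsPath(s).as_uri())     # file:///C:/path/to/file.txt

# Under POSIX rules, the same string is one relative file name (backslash is
# not a separator there), so it cannot become a file: URI at all.
print(PurePosixPath(s).is_absolute())  # False
```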
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5227#discussion_r29132469
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -260,15 +260,14 @@ static String quoteForBatchScript(String arg
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5227#issuecomment-96598393
The problem I mentioned was that spark-shell.cmd, which is called by
`SparkLauncherSuite`, somehow failed to launch the test application.
It turned out to be caused
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-100205421
I'm so sorry for leaving this for a very long time. I modified it per your
comments.
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/5447#discussion_r29933238
--- Diff:
repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -206,7 +206,8 @@ class SparkILoop(
// e.g. file:/C:/my
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/5447#issuecomment-101337444
@vanzin Thank you for your comments.
About Windows paths, you're right. Someone might write one like
`C:/foo/bar`, though `/` is not the correct path separator