Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-185480094
@vanzin : yes =(
At least now we have agreement on the approach for how to do it in standalone.
I will move the discussions to the mailing list. Was not my intention
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-185476018
Seems like now when the Executor tries to access HDFS it gets this error:
```
Failed on local exception: java.io.IOException
```
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-185456460
Ah, I need to clarify: when I said "there is a way or process to do "kinit"
for each user", I meant just that. The process of doing kinit or se
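For context, a per-user kinit of the kind described here is typically done against a keytab; the sketch below is illustrative only, with a placeholder keytab path and principal that are not from this thread:

```shell
# Hypothetical example: obtain a Kerberos TGT for one user from a keytab
# before that user's processes touch HDFS. Path and principal are made up.
kinit -kt /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM

# Verify the ticket cache now holds a valid TGT for the principal.
klist
```

This has to happen per user (or per service principal) because HDFS authenticates each RPC caller individually.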
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-185450393
Yes, I think @tgravescs's proposal above includes moving the keytabs and
tokens to other users.
What I was agreeing to was adding a fix for what broke SPARK-2541, which
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-185443405
@vanzin : as @tgravescs mentioned, it did not work anymore, hence the need
to fix it.
All the PRs and JIRAs for this issue had been closed to point to #4106
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-184978890
@pwendell : The alternative PR (aka #4106) is also being closed. I think you
and @tgravescs have narrowed down the use cases to allow standalone Spark to
have access
Github user hsaputra commented on a diff in the pull request:
https://github.com/apache/spark/pull/1506#discussion_r15251850
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -215,9 +215,15 @@ object SparkEnv extends Logging {
MapOutputTracker
GitHub user hsaputra opened a pull request:
https://github.com/apache/spark/pull/1424
[SPARK-2500] Move the logInfo for registering BlockManager to
BlockManagerMasterActor.register method
PR for SPARK-2500
Move the logInfo call for BlockManager
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/1424#issuecomment-49122966
Thx as usual @rxin !
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/1157#issuecomment-46910287
Somehow the build timed out, maybe it ran out of resources to execute the
tests? =(
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/1157#issuecomment-46918929
Thanks @rxin ! =)
GitHub user hsaputra opened a pull request:
https://github.com/apache/spark/pull/1157
Cleanup on Connection, ConnectionManagerId, ConnectionManager classes part 2
Cleanup on Connection, ConnectionManagerId, and ConnectionManager classes,
part 2, done while I was working on the code
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/1060#issuecomment-45837768
Thx @rxin
GitHub user hsaputra opened a pull request:
https://github.com/apache/spark/pull/1060
Cleanup on Connection and ConnectionManager
Simple cleanup on Connection and ConnectionManager to make the IDE happy
while working on the issue:
1. Replace var with val
2. Add parentheses to Queue
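A minimal sketch of what those two cleanups look like in Scala; the names here are illustrative, not from the actual diff:

```scala
import scala.collection.mutable.Queue

object CleanupSketch {
  def main(args: Array[String]): Unit = {
    // 1. Use val instead of var when the reference is never reassigned;
    //    the Queue itself is still mutable.
    val pending = Queue[String]()   // was: var pending = Queue[String]()
    pending.enqueue("msg-1")

    // 2. Call side-effecting methods such as dequeue with parentheses,
    //    signalling the mutation at the call site.
    val head = pending.dequeue()    // was: pending.dequeue
    println(head)                   // prints "msg-1"
  }
}
```

Both changes are behavior-preserving; they only make intent (immutability of the reference, presence of side effects) explicit.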
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/993#issuecomment-45349273
Hi @marmbrus, one generic comment: could you add object or class header
comments to describe why each of them is needed and the context in which they
are used? It should be very
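The kind of class-header comment being requested might look like the following; the class name and wording are made up, not from the PR:

```scala
/**
 * One-line summary of what the class is for.
 *
 * Why it is needed: state the problem this class solves that no
 * existing class does.
 * When it is used: name the phase or caller that instantiates it.
 */
class HeaderCommentSketch {
  // A documented class reads the same as an undocumented one at runtime;
  // the header only helps the next reader.
  def describe(): String = "documented"
}
```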
Github user hsaputra commented on a diff in the pull request:
https://github.com/apache/spark/pull/993#discussion_r13495163
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/GeneratedRow.scala
---
@@ -0,0 +1,833 @@
+/*
+ * Licensed
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/953#issuecomment-45002565
JIRA filed: https://issues.apache.org/jira/browse/SPARK-2001
GitHub user hsaputra opened a pull request:
https://github.com/apache/spark/pull/953
SPARK-2001 : Remove docs/spark-debugger.md from master
Per discussion in dev list:
Seemed like the spark-debugger.md is no longer accurate (see
http://spark.apache.org/docs/latest/spark
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/867#issuecomment-44079360
Hi,
Please add more description of what the desired effect is.
It would also be appreciated if you filed an ASF JIRA ticket [1] in addition,
to help trace
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/488#issuecomment-41114194
+1
Looks good.
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/344#issuecomment-40260830
Cool! Thanks @tgravescs
GitHub user hsaputra opened a pull request:
https://github.com/apache/spark/pull/358
Remove extra semicolon in import statement and unused import in
ApplicationMaster
Small nit cleanup to remove extra semicolon and unused import in Yarn's
stable ApplicationMaster (it bothers me
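The kind of nit being removed, sketched with made-up imports rather than the actual ApplicationMaster ones:

```scala
// Before: a trailing semicolon (legal in Scala but unidiomatic) and an
// import that nothing in the file references.
//   import java.util.UUID;
//   import scala.collection.mutable.ListBuffer   // unused

// After: only the import that is used, with no semicolon.
import java.util.UUID

object ImportNitSketch {
  def main(args: Array[String]): Unit = {
    // A canonical UUID string is always 36 characters long.
    println(UUID.randomUUID().toString.length)   // prints 36
  }
}
```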
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/358#issuecomment-39819023
@srowen there have been PRs to remove unnecessary semicolons and imports
before, and I believe we have removed most of them.
Probably the best way is to have some kind
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/358#issuecomment-39851403
Seems like Jenkins had a build error related to some PySpark file parsing.
Can we try it again?
Github user hsaputra commented on a diff in the pull request:
https://github.com/apache/spark/pull/344#discussion_r11397585
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -112,7 +112,12 @@ private[spark] class SparkUI(
logInfo("Stopped Spark Web UI
Github user hsaputra commented on a diff in the pull request:
https://github.com/apache/spark/pull/344#discussion_r11401846
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -112,7 +112,12 @@ private[spark] class SparkUI(
logInfo("Stopped Spark Web UI
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/358#issuecomment-39911243
Thx!
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/146#issuecomment-38185239
Hi @pwendell, how does the doc you posted [1] differ from the one posted by
@marmbrus at
http://www.cs.berkeley.edu/%7Emarmbrus/sparkdocs/_site/sql-programming-guide.html
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/146#issuecomment-38199943
Ah got it, thanks for the info @marmbrus