Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/13130
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/12700
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-222502337
The size of a JobConf is about 124k and the memory of my driver is 10g (> 124k * 10000 = 1.18g), so it works. @rxin
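The back-of-the-envelope memory check can be reproduced directly. This is a sketch that assumes a 10,000-entry cache bound, since that is the entry count that makes the quoted 1.18g figure come out; the bound is not stated explicitly in this message.

```scala
object JobConfMemCheck {
  def main(args: Array[String]): Unit = {
    val perConfKiB = 124L         // reported size of one cached JobConf
    val entries    = 10000L       // assumed cache bound (hypothetical)
    val totalGiB   = perConfKiB * entries / 1024.0 / 1024.0
    assert(totalGiB < 10.0)       // comfortably below a 10g driver
    println(f"$totalGiB%.2f GiB")
  }
}
```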
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-221146422
I'm confused about it too
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-220497354
cc @yhuai @srowen
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-220273234
I referred to other code using the maximumSize method @akohli
```
/** A cache of Spark SQL data source tables that have been accessed. */
protected[hive
```
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-219905068
It's OK. If the size exceeds 1000, the new values will replace the old values.
@akohli
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13130#discussion_r63453073
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -363,7 +363,7 @@ private[spark] object HadoopRDD extends Logging
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/13130
[SPARK-15340][SQL] Limit the size of the map used to cache JobConfs to avoid
OOM
# What changes were proposed in this pull request?
Limit the size of the map used to cache JobConfs.
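The proposed bounded cache can be sketched with a stdlib LRU map. This is an illustrative stand-in, not the PR's code (which, per the comments above, uses a cache builder's maximumSize method); `BoundedCache` and the limit of 1000 are hypothetical names and values.

```scala
import java.util.{LinkedHashMap => JLinkedHashMap, Map => JMap}

// Access-ordered LinkedHashMap that evicts the least-recently-used entry
// once the size bound is exceeded.
class BoundedCache[K, V](maxSize: Int)
  extends JLinkedHashMap[K, V](16, 0.75f, true) {
  override protected def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
    size() > maxSize
}

object BoundedCacheDemo {
  def main(args: Array[String]): Unit = {
    val cache = new BoundedCache[String, String](1000)
    (1 to 2000).foreach(i => cache.put(s"jobConf-$i", "conf"))
    // Only the most recent 1000 entries survive; older ones were evicted.
    assert(cache.size() == 1000)
    println(cache.size())
  }
}
```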
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/12700#issuecomment-214983252
Now I just know that a corrupted shuffle file could cause this problem, but
I do not know why the shuffle file is corrupted. @jerryshao @viper-kun
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/12700#issuecomment-214943061
I find that some tasks recompute before FAILED_TO_UNCOMPRESS happens, and I
think that something like https://github.com/apache/spark/pull/9610 caused this
problem
GitHub user DoingDone9 reopened a pull request:
https://github.com/apache/spark/pull/12700
[SPARK-4105][CORE] regenerate the shuffle file when it is corrupted
I find that some tasks recompute before FAILED_TO_UNCOMPRESS happens, and I
think that the retry operation corrupted the shuffle
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/12700
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/12700#issuecomment-214940051
Yeah, I haven't found the root cause yet and have been troubled by this problem
for a long time. Any ideas about this problem? @srowen
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/12700
[SPARK-4105][Core] regenerate the shuffle file when it is corrupted
I find that some tasks recompute before FAILED_TO_UNCOMPRESS happens, and I
think that the retry operation corrupted the shuffle file
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/9113#issuecomment-203886680
good job!
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/9951#issuecomment-159529657
Is it OK? @rxin
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/9951#issuecomment-159525672
```
shutdownDeletePaths.foreach { dirPath =>
  try {
    logInfo("Deleting directory " + dirPath)
    Utils.deleteRecursively(new File(dirPath))
  } catch {
    case e: Exception => logError("Exception while deleting Spark temp dir: " + dirPath, e)
  }
}
```
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/9951#issuecomment-159527936
Ok, get it @rxin
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/9951#issuecomment-159521038
It cannot delete all elements of shutdownDeletePaths.
Like the example above, this method cannot delete all elements of a.
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/9951
[SPARK-11974][CORE]Not all the temp dirs had been deleted when the JVM exits
Deleting the temp dirs like that
```
val a = mutable.Set(1,2,3,4,7,0,98,9,8)
a.foreach(x => a.remove(x))
```
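The failure mode this report describes (mutating a `mutable.Set` while foreach-ing over it can skip elements) is avoided by snapshotting the elements before removing them. A minimal sketch; `SafeRemoval` is a hypothetical name.

```scala
import scala.collection.mutable

object SafeRemoval {
  def main(args: Array[String]): Unit = {
    val a = mutable.Set(1, 2, 3, 4, 7, 0, 98, 9, 8)
    // Removing inside `a.foreach` invalidates the iterator's position and
    // may skip elements; iterating a snapshot makes removal deterministic.
    a.toList.foreach(x => a.remove(x))
    assert(a.isEmpty)
    println(a.size)
  }
}
```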
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/7118#issuecomment-154654481
@navis
When I use spark-sql and run SQL like that
"
add jar /home/udf-0.0.1-SNAPSHOT.jar;
create temporary function arr_greater_
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/7209#issuecomment-119083697
LGTM
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-111042808
@marmbrus @yhuai @scwf /cc
---
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/6351
---
GitHub user DoingDone9 reopened a pull request:
https://github.com/apache/spark/pull/6351
[SPARK-7824][SQL]Collapsing operator reordering and constant folding into a
single batch to push down the single side.
SQL
```
select * from tableA join tableB on (a 3 and b = d
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5553
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-108669840
@marmbrus
---
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/6410
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-105740359
@marmbrus /cc
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6410#issuecomment-105740488
@marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6410#issuecomment-105707355
Cancel user role in permission control
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-10575
Yes, it can work, but this batch will have two different types of
optimizers. @marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-105458594
@marmbrus @scwf
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/6410
[SPARK-7867][SQL] Support revoke role ...
SQL like
```
revoke role role_a from user user1;
```
You can merge this pull request into a Git repository by running:
$ git pull https
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6410#issuecomment-105510479
@scwf
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/6351#issuecomment-104817104
This optimizer can avoid CartesianProduct.
Tables
```
tableA    tableB    tableC
a int     c int     f int
b int     d
```
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/6351
[SPARK-7824][SQL] Extracting and/or condition optimizer from
BooleanSimplification optimizer and put it before PushPredicateThroughJoin
optimizer to push down the single side.
SQL
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5972#issuecomment-100823894
@marmbrus @yhuai @scwf
---
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5538
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5972
[SPARK-7437][SQL] Fold literal in (item1, item2, ..., literal, ...) into
false directly if not in.
Just Fold literal in (item1, item2, ..., literal, ...) into true directly
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5972
---
GitHub user DoingDone9 reopened a pull request:
https://github.com/apache/spark/pull/5972
[SPARK-7437][SQL] Fold literal in (item1, item2, ..., literal, ...) into
false directly if not in.
Just Fold literal in (item1, item2, ..., literal, ...) into true directly
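The folding idea in this PR can be sketched on a toy expression tree (hypothetical types, not Catalyst's Expression classes): when the tested value and every list item are literals, the whole IN predicate collapses to a boolean at optimization time.

```scala
object FoldInSketch {
  sealed trait Expr
  case class Lit(v: Int) extends Expr
  case class In(value: Expr, list: Seq[Expr]) extends Expr
  case class BoolLit(b: Boolean) extends Expr

  // Fold `Lit IN (Lit, Lit, ...)` into a boolean literal; anything else
  // is left untouched.
  def fold(e: Expr): Expr = e match {
    case In(Lit(v), items) if items.forall(_.isInstanceOf[Lit]) =>
      BoolLit(items.contains(Lit(v)))
    case other => other
  }

  def main(args: Array[String]): Unit = {
    assert(fold(In(Lit(2), Seq(Lit(1), Lit(2), Lit(3)))) == BoolLit(true))
    assert(fold(In(Lit(5), Seq(Lit(1), Lit(2), Lit(3)))) == BoolLit(false))
    println("folded")
  }
}
```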
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5972#discussion_r29911099
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -293,7 +293,15 @@ object ConstantFolding extends
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5972#issuecomment-99842905
@scwf
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5972#discussion_r29912494
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -46,13 +46,13 @@ object DefaultOptimizer extends
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5972#discussion_r29912351
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -110,6 +110,7 @@ case class InSet(value
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5770#issuecomment-97353247
@rxin I have added a test for this.
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5770
[SPARK-7225][SQL] CombineLimits optimizer does not work
SQL
```
select key from (select key from src limit 100) t2 limit 10
```
Optimized Logical Plan before modifying
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5797
[SPARK-7267][SQL]Push down Project when it's child is Limit
SQL
```
select key from (select key,value from t1 limit 100) t2 limit 10
```
Optimized Logical Plan before modifying
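The two limit-related rewrites above (CombineLimits, and pushing Project below Limit) can be illustrated on a toy plan algebra. These case classes and rules are hypothetical sketches, not Catalyst's LogicalPlan nodes or the PRs' actual rules.

```scala
object PlanRewriteSketch {
  sealed trait Plan
  case class Scan(table: String) extends Plan
  case class Limit(n: Int, child: Plan) extends Plan
  case class Project(cols: Seq[String], child: Plan) extends Plan

  // Push a Project below an adjacent Limit, so two Limits become adjacent.
  def pushProject(p: Plan): Plan = p match {
    case Project(cols, Limit(n, child)) => Limit(n, Project(cols, child))
    case other                          => other
  }

  // Collapse two adjacent Limits into one, keeping the smaller bound.
  def combineLimits(p: Plan): Plan = p match {
    case Limit(outer, Limit(inner, child)) => Limit(math.min(outer, inner), child)
    case other                             => other
  }

  def main(args: Array[String]): Unit = {
    // select key from (select key, value from t1 limit 100) t2 limit 10
    val plan = Limit(10, Project(Seq("key"), Limit(100, Scan("t1"))))
    val pushed = plan match {
      case Limit(n, child) => Limit(n, pushProject(child))
      case p               => p
    }
    val optimized = combineLimits(pushed)
    assert(optimized == Limit(10, Project(Seq("key"), Scan("t1"))))
    println(optimized)
  }
}
```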
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5770#issuecomment-97645345
Got it @marmbrus @rxin
---
GitHub user DoingDone9 reopened a pull request:
https://github.com/apache/spark/pull/5553
[SPARK-6976][SQL] drop table if exists src print ERROR info that should
not be printed when src not exists.
If table src does not exist and you run SQL drop table if exists src, then some
ERROR info
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28933363
--- Diff:
sql/hive/v0.13.1/src/main/scala/org/apache/spark/sql/hive/Shim13.scala ---
@@ -218,7 +218,13 @@ private[hive] object HiveShim
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28933263
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5418
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28751859
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28838875
--- Diff:
sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala ---
@@ -135,7 +135,13 @@ private[hive] object HiveShim
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5610#issuecomment-95004242
Got it. This is not a good solution; I will close it. Maybe I can do
something in LeftSemiJoinHash and BroadcastLeftSemiJoinHash. Thank you
@marmbrus
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28840547
--- Diff:
sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala ---
@@ -135,7 +135,13 @@ private[hive] object HiveShim
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5610
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28839749
--- Diff:
sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala ---
@@ -135,7 +135,13 @@ private[hive] object HiveShim
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28840911
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5610
[SPARK-7026][SQL] make LeftSemiJoin work when it has both equal condition
and not equal condition
When LeftSemiJoin has both an equal condition and a not-equal condition, it
cannot work. SQL like
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28746146
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5538#issuecomment-94635160
@chenghao-intel your idea is good, but "select current_database" is
Hive syntax, and I want to implement it. And this UDF does not run within
executor(s
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5538#issuecomment-94629707
@chenghao-intel I know this method can get dbName, but it can only be
used with the CLI. It is necessary to get dbName without the CLI. And I have explained
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5538#issuecomment-94617955
Sorry, I changed the code and then the comment disappeared; I will add it again.
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28746101
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5553
[SPARK-6976][SQL] drop table if exists src print ERROR info that should
not be printed when src not exists.
If table src does not exist and you run SQL drop table if exists src, then some
ERROR info
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/5553
---
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5538#discussion_r28498708
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/sqlUDFCurrentDB.scala ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5538
[SPARK-6198][SQL] Support select current_database()
to support select current_database()
The method (evaluate) has changed in UDFCurrentDB; it just throws an
exception. But hiveUdfs
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5538#issuecomment-93625012
@marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-93624977
I have opened a new pr https://github.com/apache/spark/pull/5538
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-92599430
I do not agree with that. Because this expression is foldable, it will be
computed in the Optimizer's ConstantFolding. So I will get the name of currentDB
after
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-92159014
My previous test was successful, and I will test it again. Thank you
@marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5166#issuecomment-92159074
I agree with you; I'll close it. Thank you @marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-92189086
Could you tell me how you got this exception? I tested with three nodes, and
it works again. Thank you @marmbrus
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5418
[SPARK-6768][SQL] Do not support float/double union decimal or decimal(a, b)
union decimal(c, d)
Do not support SQL like that
```
select cast(12.2056999 as float) from testData limit 1
```
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/3931
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5198#issuecomment-86416015
OK, I will change it @liancheng
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5198#issuecomment-86363210
you are right @liancheng
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-86384179
yes, it works. I have tested it in the distributed mode with two nodes.
@rxin
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5131#issuecomment-85914215
I have added a test for it @marmbrus
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5198
A little spelling mistake, but it makes the Spark compile fail!!
wrong code: val tmpDir = Files.createTempDir()
it should be File, not Files
You can merge this pull request into a Git repository
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4995#issuecomment-86329564
Will anyone test it? @marmbrus @srowen
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/5198#issuecomment-86324403
I think this is pressing @marmbrus
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5166
[SPARK-6493][SQL]Support numeric(a,b) in the sqlContext
support SQL like this:
select cast(20.12 as numeric(4,2)) from src limit 1;
You can merge this pull request into a Git repository
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/5131
[SPARK-6049][SQL] It is not necessary to avoid the old interface of Hive,
because this makes some UDAFs not work.
Spark avoids the old interface of Hive, so some UDAFs cannot work, like
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4993#issuecomment-78838440
/cc @sryza @srowen
---
Github user DoingDone9 closed the pull request at:
https://github.com/apache/spark/pull/4926
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4926#issuecomment-78447224
I have opened a new PR for this. I create a new UDF and register it
instead of intercepting code.
https://github.com/apache/spark/pull/4995 @chenghao-intel
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/4995
[SPARK-6198][SQL] Support select current_database()
The method (evaluate) has changed in UDFCurrentDB; it just throws an
exception. But hiveUdfs call this method and fail.
@Override
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4959#issuecomment-78453711
/CC @marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4973#issuecomment-78453677
/CC @srowen
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4993#issuecomment-78783053
I have changed it. Please test it. @srowen
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4902#issuecomment-78462968
/cc @marmbrus
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4993#issuecomment-78462887
I have added the test, please test it. @sryza
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-78463251
LGTM
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/4993
[SPARK-6300][Spark Core] sc.addFile(path) does not support the relative
path.
When I run a command like sc.addFile("../test.txt"), it did not work and
threw an exception
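One hedged workaround for the relative-path failure reported here is to canonicalize the path before handing it to addFile. `resolve` is a hypothetical helper name, and no SparkContext is needed to demonstrate it:

```scala
import java.io.File

object AddFileSketch {
  // Turn a possibly relative path (e.g. "../test.txt") into an absolute one
  // before passing it to sc.addFile.
  def resolve(path: String): String = new File(path).getAbsolutePath

  def main(args: Array[String]): Unit = {
    val p = resolve("../test.txt")
    assert(new File(p).isAbsolute)
    println(p)
  }
}
```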