Github user jeanlyn commented on the issue:
https://github.com/apache/spark/pull/11228
@tdas I have added a unit test to this PR; could you take some time to review?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If
Github user jeanlyn commented on the issue:
https://github.com/apache/spark/pull/11228
@tdas OK, I will try to add a unit test these days.
---
Github user jeanlyn commented on the issue:
https://github.com/apache/spark/pull/11228
@vanzin Sorry for the late reply; I have resolved the conflicts.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12150#issuecomment-206053030
OK.
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/12150
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12150#issuecomment-205341839
retest this please.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12091#issuecomment-205334309
@andrewor14 Sorry for the late reply; I have submitted a patch for 1.6: #12150
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12150#issuecomment-205334579
/cc @andrewor14 .
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/12150
[SPARK-14243][CORE][BACKPORT-1.6] Update task metrics when removing blocks
## What changes were proposed in this pull request?
This patch tries to update the `updatedBlockStatuses` when
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12091#issuecomment-204006712
cc @andrewor14
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/12091
update task metrics when removing blocks
## What changes were proposed in this pull request?
This PR tries to use `incUpdatedBlockStatuses` to update the `updatedBlockStatuses` when
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12028#issuecomment-203173603
OK.
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/12028
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/12028#issuecomment-202777494
/cc @andrewor14
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/12028
[SPARK-13845][CORE][Backport-1.6] Using onBlockUpdated to replace onTaskEnd, avoiding driver OOM
## What changes were proposed in this pull request?
We have a streaming job using
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-202653414
OK, I will file a JIRA later.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11779#issuecomment-202649212
Sure. I will submit a patch against branch-1.6
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/11779
[SPARK-13845][CORE] Using onBlockUpdated to replace onTaskEnd, avoiding driver OOM
## What changes were proposed in this pull request?
We have a streaming job using `FlumePollInputStream
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11779#issuecomment-197786170
This PR is the same as #11679, but I ran into some accidents when rebasing it, so I created a new one.
/cc @andrewor14
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-197674341
All the test failures are related to `HistoryServerSuite`; the reason is that we removed `onTaskEnd`, which is used to replay the storage page of the history server fro
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-197778717
Closed this by accident.
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/11679
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-197127323
I think the metrics of `updatedBlockStatuses` are not updated using code like
```
c.taskMetrics().incUpdatedBlockStatuses(Seq((blockId, status
```
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-196617986
@andrewor14 It seems the MIMA failure is not related to this patch. Do I need to fix it in this patch?
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11679#issuecomment-196591027
Thanks @andrewor14 for the review. We encountered this issue in branch-1.5, and I have noticed the recent changes to the metrics. If I understand correctly, I think the root cause of
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/11679
[CORE] Using onBlockUpdated to replace onTaskEnd, avoiding driver OOM
## What changes were proposed in this pull request?
We have a streaming job using `FlumePollInputStream` always driver
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/11440
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11440#issuecomment-190994425
Thanks @jerryshao, @srowen, and @zsxwing for the suggestions. I will close this PR.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11440#issuecomment-190613342
My bad. I will try to figure out a way to fix the case where window operations appear with the config set to true.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11440#issuecomment-190608101
@jerryshao Thanks for the explanation; I see what you mean. It only happens in the beginning, and if the stop time is much longer than the window time, I think
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11440#issuecomment-190568465
Thanks @jerryshao for the suggestion!
> Jobs generated in the down time can be used for WAL replay, did you test
when these down jobs are removed, the behavior of
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/11440
[SPARK-13586] Add a config to skip generating down-time batches when restarting a StreamingContext
## What changes were proposed in this pull request?
The patch tries to add a config
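The restart behavior this config targets can be sketched as follows. This is an illustrative sketch of the idea only, not Spark's actual `JobGenerator` restart logic; the class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Spark's code; all names here are invented).
// On recovery, a StreamingContext normally regenerates every batch time
// that fell in the down time; with the flag on, that backlog is skipped.
public class DownTimeBatches {
    /** Batch times strictly after lastBatchMs, up to and including restartMs. */
    static List<Long> batchesToGenerate(long lastBatchMs, long restartMs,
                                        long intervalMs, boolean skipDownTime) {
        List<Long> batches = new ArrayList<>();
        if (skipDownTime) {
            return batches; // flag on: do not replay the down-time batches
        }
        for (long t = lastBatchMs + intervalMs; t <= restartMs; t += intervalMs) {
            batches.add(t);
        }
        return batches;
    }
}
```

As the thread notes, skipping the backlog interacts badly with window operations and WAL replay, which is why the flag would have to be opt-in.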
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11228#issuecomment-185001455
@JoshRosen Sure.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11228#issuecomment-184981662
@tdas @zsxwing
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/11228
[SPARK-13356][Streaming] Web UI missing input information when recovering from driver failure
Issue link:[SPARK-13356](https://issues.apache.org/jira/browse/SPARK-13356)
You can merge this pull
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/10128#issuecomment-169563822
It's different from join selection; it just pulls nondeterministic expressions out of the join condition into the left or right children, but it seems it can reuse the
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/10128#issuecomment-169524471
@marmbrus You are right. But I think @zhonghaihua's solution tries to reduce the possibility of a Cartesian product, right?
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/10128#issuecomment-161851642
@cloud-fan I think your case is different from @zhonghaihua's. The SQL only deals with some join keys ('' and null) before the shuffle to handle those poin
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/7927#issuecomment-137046713
It seems that the failure is not related.
---
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7535#discussion_r38332979
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -129,6 +128,14 @@ trait CheckAnalysis
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7535#discussion_r38289221
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -129,6 +128,14 @@ trait CheckAnalysis
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/8404#discussion_r37867722
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/QueryPartitionSuite.scala ---
@@ -18,50 +18,54 @@
package org.apache.spark.sql.hive
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36587468
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -590,10 +590,24 @@ private[spark] class BlockManager(
private def
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/7927#issuecomment-128582275
Thanks everyone for the review! I updated the code; now `doGetRemote` will tolerate an exception while we still have locations from which to fetch the block, avoiding the work flow
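The retry behavior that comment describes can be sketched as follows; this is a minimal sketch with hypothetical names, not the actual `BlockManager.doGetRemote` implementation.

```java
import java.util.List;
import java.util.function.Function;

// Sketch of the retry idea (hypothetical names, not Spark's BlockManager):
// tolerate a fetch failure as long as more replica locations remain, and
// rethrow only on the last location so the task still fails when no
// replica is reachable.
public class RemoteFetch {
    static <T> T fetchFromAny(List<String> locations, Function<String, T> fetch) {
        for (int i = 0; i < locations.size(); i++) {
            try {
                T block = fetch.apply(locations.get(i));
                if (block != null) {
                    return block; // fetched successfully
                }
            } catch (RuntimeException e) {
                if (i == locations.size() - 1) {
                    throw e; // last location failed too: give up
                }
                // otherwise swallow the error and try the next location
            }
        }
        return null; // every location reported the block as missing
    }
}
```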
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36302564
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -443,6 +448,34 @@ class BlockManagerSuite extends SparkFunSuite with
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36294518
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -443,6 +448,37 @@ class BlockManagerSuite extends SparkFunSuite with
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36217402
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -592,8 +592,14 @@ private[spark] class BlockManager(
val locations
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36182323
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -443,6 +443,21 @@ class BlockManagerSuite extends SparkFunSuite with
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/7927#discussion_r36182313
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -592,8 +592,14 @@ private[spark] class BlockManager(
val locations
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/7927
[SPARK-9591][CORE] Job may fail due to an exception while getting a broadcast variable
[SPARK-9591](https://issues.apache.org/jira/browse/SPARK-9591)
When we get the broadcast variable, we can
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/5717#discussion_r32891298
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoin.scala
---
@@ -82,86 +130,169 @@ case class SortMergeJoin
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/5717#discussion_r32891269
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoin.scala
---
@@ -82,86 +130,169 @@ case class SortMergeJoin
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/5717#discussion_r32891271
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala ---
@@ -90,13 +90,12 @@ private[sql] abstract class SparkStrategies
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6833#issuecomment-112636409
@chenghao-intel, I think it only affects dynamic partitions, because `SparkHadoopWriter` gets the writer via `OutputFormat.getRecordWriter`, and most of them use the
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/6833#discussion_r32592438
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveWriterContainers.scala ---
@@ -230,7 +230,15 @@ private[spark] class
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/6833#discussion_r32492419
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -197,7 +197,6 @@ case class InsertIntoHiveTable
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/6833#discussion_r32491951
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -197,7 +197,6 @@ case class InsertIntoHiveTable
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/6833
[SPARK-8379][SQL] Avoid speculative tasks writing to the same file
The issue link
[SPARK-8379](https://issues.apache.org/jira/browse/SPARK-8379)
Currently, when we insert data into the dynamic
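The collision this PR addresses can be sketched as follows; the paths and names here are hypothetical, not Spark's actual committer logic.

```java
// Sketch of the file collision (hypothetical paths, not Spark's code):
// with dynamic partitions, an output path that depends only on the
// partition value is shared by a task and its speculative copy, so they
// overwrite each other. Scoping the name by task and attempt id keeps
// the attempts apart until one of them is committed.
public class OutputPaths {
    static String sharedPath(String partition) {
        return "out/" + partition + "/part-00000";
    }
    static String attemptPath(String partition, long taskId, int attemptId) {
        return "out/" + partition + "/part-00000-" + taskId + "-" + attemptId;
    }
}
```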
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/6682
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6682#issuecomment-109871137
@yhuai Yes, the full outer join cases shuffled the null keys to the same reducer in spark-sql, and the Hive plan was generated like:
```sql
explain select a.value
```
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6682#issuecomment-109777121
@yhuai, thanks for the comment. The current implementation of `join(BinaryNode)` in master simply uses one side's partitioning as its own partitioning to judge whether
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6682#issuecomment-109564743
cc @yhuai @chenghao-intel
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/6682
[SPARK-2205][SPARK-7871][SQL] Avoid redundant exchanges
Using only the output partitioning of `BinaryNode` will probably add unnecessary `Exchange` nodes, as in a multiway join. This PR adds
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/6563#discussion_r31490129
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/VersionsSuite.scala ---
@@ -37,6 +38,48 @@ class VersionsSuite extends SparkFunSuite with
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5876#issuecomment-107459171
I can use the built-in classes (0.13.1) to connect to a `0.12.0` metastore correctly, except for some warnings and errors that do not affect running:
```
5/06/01 21:20:09 WARN
```
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5876#issuecomment-107415482
I set the metastore to `0.12.0` by the following steps, but got a class-not-found exception:
* I changed `spark.sql.hive.metastore.version` in `spark-defaults.conf` to `0.12.0
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6426#issuecomment-107059001
Thanks @rxin for the information. I will close this PR for now and reopen it once it is optimized.
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/6426
---
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/6413#discussion_r31158220
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala ---
@@ -32,6 +32,26 @@ import org.apache.spark.sql.types
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/6426#issuecomment-105759184
Thanks @JoshRosen and @rxin for the comments. I have met at least two group-by cases in our production environment that run with long GC times and finally crash the executor. These cases have a
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/6426
[SPARK-7885][SQL] Add a config to control map-side aggregation in Spark SQL
[SPARK-7885](https://issues.apache.org/jira/browse/SPARK-7885): we add `spark.sql.partialAggregation.enable`; it's true by de
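Map-side ("partial") aggregation, the behavior the proposed flag would toggle, can be sketched as follows; this is an illustration of the general technique, not Spark's hash-aggregate code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of map-side ("partial") aggregation (illustrative only, not
// Spark's code). The hash map holds one entry per distinct key, so when
// nearly every key is unique it grows to the size of the input and saves
// no shuffle volume -- the long-GC scenario described in the comments
// around this PR.
public class PartialAgg {
    static Map<String, Long> partialCounts(Iterable<String> keys) {
        Map<String, Long> counts = new HashMap<>();
        for (String k : keys) {
            counts.merge(k, 1L, Long::sum);
        }
        return counts;
    }
}
```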
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/3438#discussion_r30295775
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/sort/SortShuffleReader.scala ---
@@ -0,0 +1,337 @@
+/*
+ * Licensed to the Apache Software
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5220#issuecomment-86787963
@chenghao-intel, I think #5198 has fixed the problem.
---
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/4586#discussion_r26954661
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -76,7 +77,8 @@ class HadoopTableReader(
override def
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/5079
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-85085332
After communicating with @adrian-wang offline, I realized this PR still leaves some class loader problems, so I will close this one.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83383402
I also don't have CHAR in `mapjoin_addjar.q`. I only found one `mapjoin_addjar.q`, and the path of my file is sql/hive/src/test/resources/ql/src/test/qu
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83372419
@chenghao-intel my full code is
```java
import org.apache.hadoop.hive.ql.exec.UDF;
public class hello extends UDF {
    public String evaluate(String
```
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83348085
@adrian-wang, I have tested in `spark-sql` and got the correct result with my test case. Can you provide your test case? By the way, when I debugged this issue, I found in the
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83299775
@adrian-wang You mean it does not work in `spark-shell`?
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83267913
@chenghao-intel I am not clear what problem #4586 tries to fix. If #4586 tries to fix the problem I mentioned, I think reusing `SparkContext.addJar` is enough to
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-83254407
@yhuai, there is a simple function:
```java
public String evaluate(String str) {
    try {
        return "hello " + str;
```
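The truncated UDF quoted above presumably continues along these lines; a self-contained sketch in which the Hive base class is dropped so the snippet compiles standalone, and everything after the quoted `return` line is a guess at the cut-off original.

```java
// Self-contained sketch of the truncated UDF quoted above. The real class
// extends org.apache.hadoop.hive.ql.exec.UDF; that dependency is dropped
// here so the snippet compiles on its own, and everything after the
// `return "hello " + str;` line is a guess at the cut-off original.
public class HelloUdf {
    public String evaluate(String str) {
        try {
            return "hello " + str;
        } catch (Exception e) {
            return null;
        }
    }
}
```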
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-82976538
Ok.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-82970359
Thanks @liancheng for the explanation. You are right; it needs more consideration. So, should I remove the test?
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-82926819
Hi @marmbrus, I have updated the code as you mentioned.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/5079#issuecomment-82908034
Updated. @liancheng @marmbrus, I have tried to add a test for this patch; could you take a look at it?
---
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/5079
[SPARK-6392][SQL] Minor fix for the ClassNotFoundException when using the Spark CLI to add a jar
When we use the Spark CLI to add a jar dynamically, we will get a `java.lang.ClassNotFoundException` when we use the class
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/4586#discussion_r26595976
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -263,7 +263,8 @@ private[hive] class
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-82383269
Updated, @marmbrus @chenghao-intel. We have tested this patch in our environment over the past few days. Are there any more problems with this patch?
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/3907#issuecomment-75260761
OK, I will close this one.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/3891#issuecomment-75260856
OK, I will close this one.
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/3907
---
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/3891
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73877412
/cc @marmbrus
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73868381
@chenghao-intel, I have passed all the unit tests locally, but I think the thrift-server unit tests seem unstable; they depend on the state of the machine. When the ma
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73837462
Hi @marmbrus, @chenghao-intel, I have no idea why the test `SPARK-4407 regression: Complex type support` failed after I resolved the merge conflicts. It seems that not
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73836885
Retest this please
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73459718
Thanks @chenghao-intel for the review and suggestions! I took some of your advice to simplify the code.
---
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-73176093
Hi @chenghao-intel, @marmbrus, any suggestions?
---
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24139046
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -264,15 +268,31 @@ private[hive] object HadoopTableReader extends
1 - 100 of 118 matches