[GitHub] spark issue #17893: FileFormatWriter wrap the FetchFailedException which bre...

2017-05-07 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/17893
  
Pending. I am reformatting the pull request title~


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #17893: FileFormatWriter wrap the FetchFailedException wh...

2017-05-07 Thread lshmouse
GitHub user lshmouse opened a pull request:

https://github.com/apache/spark/pull/17893

FileFormatWriter wrap the FetchFailedException which breaks job's failover

## What changes were proposed in this pull request?
Handle the FetchFailedException separately in FileFormatWriter so that it is not wrapped and the scheduler's failure recovery keeps working.

## How was this patch tested?
manual tests
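The idea of the patch can be sketched outside of Spark. The following is a minimal, hypothetical model, not Spark's actual code: the two exception classes stand in for `org.apache.spark.shuffle.FetchFailedException` and the `SparkException` wrapper that `FileFormatWriter` uses, and `runTask` stands in for the task body.

```scala
// Stand-in exception types (the real ones live in Spark; these are
// simplified so the sketch is self-contained).
class FetchFailedException(msg: String) extends Exception(msg)
class TaskFailedException(msg: String, cause: Throwable) extends Exception(msg, cause)

// The gist of the fix: rethrow fetch failures unwrapped so the scheduler
// can still recognize them and resubmit the lost map stage, while all
// other errors keep being wrapped as a generic task failure.
def runTask(body: () => Unit): Unit =
  try body()
  catch {
    case f: FetchFailedException => throw f // preserve the type for failover
    case t: Throwable            => throw new TaskFailedException("Task failed while writing rows", t)
  }
```

If the fetch failure were wrapped like every other error, the DAG scheduler would see only a generic task failure and retry the task instead of regenerating the lost shuffle output, which is what breaks the job's failover.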

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lshmouse/spark FileFormatWriter

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/17893.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17893


commit c1a635e722e36714582ab10ec04a361ff67c3aa5
Author: Liu Shaohui <liushao...@xiaomi.com>
Date:   2017-05-05T08:58:23Z

FileFormatWriter wrap the FetchFailedException which breaks the failure 
recovery chain

commit c869d9c7acfe4fe9c43070185cbe303241248f08
Author: Liu Shaohui <liushao...@xiaomi.com>
Date:   2017-05-08T01:19:20Z

Fix bugs







[GitHub] spark issue #16602: [SPARK-19238][GRAPHX] Ignore sorting the edges if edges ...

2017-01-16 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/16602
  
@srowen 
After checking the implementation of Timsort, I found it is already optimized for sorted 
arrays. 
Please just ignore this PR. Thanks for your time.
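For reference, `java.util.Arrays.sort` on object arrays uses Timsort, which begins by scanning for an existing ascending run, so a fully sorted input is recognized in a single O(n) pass. A small sketch with a counting comparator makes this visible (the exact comparison count is a JDK implementation detail, so the sketch only bounds it):

```scala
import java.util.Comparator

// A comparator that counts how many comparisons the sort performs.
var comparisons = 0
val counting: Comparator[Integer] = (a: Integer, b: Integer) => {
  comparisons += 1
  a.compareTo(b)
}

// Already-sorted input: Timsort detects one ascending run covering the
// whole array and stops, so roughly n - 1 comparisons suffice.
val values: Array[Integer] = Array(1, 2, 3, 4, 5, 6, 7, 8)
java.util.Arrays.sort(values, counting)
```

This is why an extra "skip if sorted" check before the sort buys little in practice.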






[GitHub] spark pull request #16602: [SPARK-19238][GRAPHX] Ignore sorting the edges if...

2017-01-16 Thread lshmouse
Github user lshmouse closed the pull request at:

https://github.com/apache/spark/pull/16602





[GitHub] spark pull request #16602: SPARK-19238: Ignore sorting the edges if edges ar...

2017-01-16 Thread lshmouse
GitHub user lshmouse opened a pull request:

https://github.com/apache/spark/pull/16602

SPARK-19238: Ignore sorting the edges if edges are sorted when building 
edge partition


## What changes were proposed in this pull request?
Skip sorting the edges if they are already sorted when building an edge partition.

The graph edges generated by upstream applications, or saved from other graphs, are 
usually already sorted, so sorting them again is unnecessary.

## How was this patch tested?
unit tests
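The proposed change can be sketched abstractly; the names here are illustrative, not GraphX's actual `EdgePartitionBuilder` API. The point is that a linear sortedness check is cheap compared to an O(n log n) sort, so it pays off whenever the upstream data is usually ordered:

```scala
// Minimal stand-in for a GraphX edge, ordered by (srcId, dstId).
case class Edge(srcId: Long, dstId: Long)

// O(n) check: are the edges already in (srcId, dstId) order?
def isSorted(edges: Array[Edge]): Boolean =
  (1 until edges.length).forall { i =>
    val (a, b) = (edges(i - 1), edges(i))
    a.srcId < b.srcId || (a.srcId == b.srcId && a.dstId <= b.dstId)
  }

// Sort only when necessary; already-sorted input is returned untouched.
def prepare(edges: Array[Edge]): Array[Edge] =
  if (isSorted(edges)) edges
  else edges.sortBy(e => (e.srcId, e.dstId))
```

As noted elsewhere in the thread, Timsort's own run detection already gives most of this benefit, which is why the PR was eventually closed.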



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lshmouse/spark SPARK-19238

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16602.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16602


commit 12448e9bbe62b8b82280b2b0754997c1187a7851
Author: Liu Shaohui <liushao...@xiaomi.com>
Date:   2017-01-16T09:38:40Z

SPARK-19238: Ignore sorting the edges if edges are sorted when building 
edge partition







[GitHub] spark issue #13706: [SPARK-15988] [SQL] Implement DDL commands: Create/Drop ...

2016-11-30 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/13706
  
@lianhuiwang 
I think the problem is that there is no need to check whether macroFunction is resolved: 
the data types may be cast dynamically according to the SQL data types.





[GitHub] spark issue #13706: [SPARK-15988] [SQL] Implement DDL commands: Create/Drop ...

2016-11-30 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/13706
  
@lianhuiwang 

Just some feedback. With this patch, creating a MACRO throws the following 
exception. Any suggestions? I am trying to debug it.

```
16/11/30 16:59:18 INFO execution.SparkSqlParser: Parsing command: CREATE TEMPORARY MACRO flr(time_ms bigint) FLOOR(time_ms/1000/3600)*3600
16/11/30 16:59:18 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING, org.apache.spark.sql.AnalysisException: Cannot resolve '(FLOOR(((boundreference() / 1000) / 3600)) * 3600)' for CREATE TEMPORARY MACRO flr, due to data type mismatch: differing types in '(FLOOR(((boundreference() / 1000) / 3600)) * 3600)' (bigint and int).;
  at org.apache.spark.sql.execution.command.CreateMacroCommand.run(macros.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:120)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:120)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:141)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:138)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:119)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
  at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:221)
  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:165)
  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:162)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1854)
  at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:175)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)
```





[GitHub] spark issue #14561: [SPARK-16972][CORE] Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/14561
  
@jerryshao 
The patch doesn't just change the code structure; it separates the responsibilities 
of these two classes.
What's more, it is the first step toward refactoring the scheduling path, because the 
code of TaskSchedulerImpl and CoarseGrainedSchedulerBackend is tightly interdependent, 
which makes it hard to understand and debug.





[GitHub] spark issue #14561: [SPARK-16972][CORE] Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/14561
  
@zsxwing @srowen 
Could you please help to review this patch? Thanks~





[GitHub] spark issue #14561: [SPARK-16972][CORE] Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/14561
  
@srowen Please help trigger the Jenkins test. Thanks~





[GitHub] spark issue #14561: [SPARK-16972][CORE] Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/14561
  
Jenkins test this please





[GitHub] spark issue #14561: [SPARK-16972][CORE] Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
Github user lshmouse commented on the issue:

https://github.com/apache/spark/pull/14561
  
Jenkins test this please





[GitHub] spark pull request #14561: SPARK-16972: Move DriverEndpoint out of CoarseGra...

2016-08-09 Thread lshmouse
GitHub user lshmouse opened a pull request:

https://github.com/apache/spark/pull/14561

SPARK-16972: Move DriverEndpoint out of CoarseGrainedSchedulerBackend

## What changes were proposed in this pull request?
Move DriverEndpoint out of CoarseGrainedSchedulerBackend and make the two 
classes cleaner.

## How was this patch tested?
Passed the existing unit tests locally.





You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lshmouse/spark DriverEndpoint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14561.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14561


commit def695421948db1efd0418625243ed645d0958fa
Author: Liu Shaohui <liushao...@xiaomi.com>
Date:   2016-08-09T09:25:41Z

SPARK-16972: Move DriverEndpoint out of CoarseGrainedSchedulerBackend



