[jira] [Updated] (LIVY-646) Travis failed on rsc unit tests: RSCClient instance stopped

2019-08-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-646:

Summary: Travis failed on rsc unit tests: RSCClient instance stopped  (was: 
Travis failed on rsc unit tests)

> Travis failed on rsc unit tests: RSCClient instance stopped
> ---
>
> Key: LIVY-646
> URL: https://issues.apache.org/jira/browse/LIVY-646
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
>  
>  Running org.apache.livy.rsc.TestSparkClient
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.861 sec 
> <<< FAILURE! - in org.apache.livy.rsc.TestSparkClient
> testBypass(org.apache.livy.rsc.TestSparkClient) Time elapsed: 0.099 sec <<< 
> ERROR!
> java.util.concurrent.ExecutionException: java.io.IOException: RSCClient 
> instance stopped.
>  at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:588)
>  at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
>  at 
> org.apache.livy.rsc.TestSparkClient.runBypassTest(TestSparkClient.java:462)
>  at org.apache.livy.rsc.TestSparkClient.testBypass(TestSparkClient.java:453)
> Caused by: java.io.IOException: RSCClient instance stopped.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-646) Travis failed on rsc unit tests: RSCClient instance stopped

2019-08-29 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-646:

Description: 
 

 Running org.apache.livy.rsc.TestSparkClient
 Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.861 sec 
<<< FAILURE! - in org.apache.livy.rsc.TestSparkClient
 testBypass(org.apache.livy.rsc.TestSparkClient) Time elapsed: 0.099 sec <<< 
ERROR!
 java.util.concurrent.ExecutionException: java.io.IOException: RSCClient 
instance stopped.
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:588)
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
 at org.apache.livy.rsc.TestSparkClient.runBypassTest(TestSparkClient.java:462)
 at org.apache.livy.rsc.TestSparkClient.testBypass(TestSparkClient.java:453)
 Caused by: java.io.IOException: RSCClient instance stopped.

 

another report: https://travis-ci.org/runzhiwang/incubator-livy/jobs/578200048

  was:
 

 Running org.apache.livy.rsc.TestSparkClient
Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.861 sec <<< 
FAILURE! - in org.apache.livy.rsc.TestSparkClient
testBypass(org.apache.livy.rsc.TestSparkClient) Time elapsed: 0.099 sec <<< 
ERROR!
java.util.concurrent.ExecutionException: java.io.IOException: RSCClient 
instance stopped.
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:588)
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
 at org.apache.livy.rsc.TestSparkClient.runBypassTest(TestSparkClient.java:462)
 at org.apache.livy.rsc.TestSparkClient.testBypass(TestSparkClient.java:453)
Caused by: java.io.IOException: RSCClient instance stopped.


> Travis failed on rsc unit tests: RSCClient instance stopped
> ---
>
> Key: LIVY-646
> URL: https://issues.apache.org/jira/browse/LIVY-646
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
>  
>  Running org.apache.livy.rsc.TestSparkClient
>  Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.861 sec 
> <<< FAILURE! - in org.apache.livy.rsc.TestSparkClient
>  testBypass(org.apache.livy.rsc.TestSparkClient) Time elapsed: 0.099 sec <<< 
> ERROR!
>  java.util.concurrent.ExecutionException: java.io.IOException: RSCClient 
> instance stopped.
>  at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:588)
>  at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
>  at 
> org.apache.livy.rsc.TestSparkClient.runBypassTest(TestSparkClient.java:462)
>  at org.apache.livy.rsc.TestSparkClient.testBypass(TestSparkClient.java:453)
>  Caused by: java.io.IOException: RSCClient instance stopped.
>  
> another report: https://travis-ci.org/runzhiwang/incubator-livy/jobs/578200048



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-656) Travis failed on "start a repl session using the rsc"

2019-09-01 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-656:

Description: 
- start a repl session using the rsc *** FAILED *** (37 seconds, 374 
milliseconds)
 java.lang.RuntimeException: java.util.concurrent.TimeoutException
 at org.apache.livy.rsc.Utils.propagate(Utils.java:60)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:228)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
 Cause: java.util.concurrent.TimeoutException:
 at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:56)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:223)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)

 

please refer to https://travis-ci.org/runzhiwang/incubator-livy/jobs/579610195

  was:
- start a repl session using the rsc *** FAILED *** (37 seconds, 374 
milliseconds)
 java.lang.RuntimeException: java.util.concurrent.TimeoutException
 at org.apache.livy.rsc.Utils.propagate(Utils.java:60)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:228)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
 Cause: java.util.concurrent.TimeoutException:
 at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:56)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:223)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)


> Travis failed on "start a repl session using the rsc"
> -
>
> Key: LIVY-656
> URL: https://issues.apache.org/jira/browse/LIVY-656
> Project: Livy
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> - start a repl session using the rsc *** FAILED *** (37 seconds, 374 
> milliseconds)
>  java.lang.RuntimeException: java.util.concurrent.TimeoutException
>  at org.apache.livy.rsc.Utils.propagate(Utils.java:60)
>  at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:228)
>  at 
> org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
>  at 
> org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
>  at 
> org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
>  at 
> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  ...
>  Cause: java.util.concurrent.TimeoutException:
>  at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:56)
>  at 

[jira] [Commented] (LIVY-657) Travis failed on should not create sessions with duplicate names

2019-09-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920651#comment-16920651
 ] 

runzhiwang commented on LIVY-657:
-

I'm working on it.

> Travis failed on should not create sessions with duplicate names
> 
>
> Key: LIVY-657
> URL: https://issues.apache.org/jira/browse/LIVY-657
> Project: Livy
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> should not create sessions with duplicate names *** FAILED *** (17 
> milliseconds)
>  session2.stopped was false (SessionManagerSpec.scala:96)
>  
> please reference to https://travis-ci.org/apache/incubator-livy/jobs/579604782



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-657) Travis failed on should not create sessions with duplicate names

2019-09-02 Thread runzhiwang (Jira)
runzhiwang created LIVY-657:
---

 Summary: Travis failed on should not create sessions with 
duplicate names
 Key: LIVY-657
 URL: https://issues.apache.org/jira/browse/LIVY-657
 Project: Livy
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.6.0
Reporter: runzhiwang


should not create sessions with duplicate names *** FAILED *** (17 milliseconds)
 session2.stopped was false (SessionManagerSpec.scala:96)

 

please refer to https://travis-ci.org/apache/incubator-livy/jobs/579604782



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-656) Travis failed on "start a repl session using the rsc"

2019-09-01 Thread runzhiwang (Jira)
runzhiwang created LIVY-656:
---

 Summary: Travis failed on "start a repl session using the rsc"
 Key: LIVY-656
 URL: https://issues.apache.org/jira/browse/LIVY-656
 Project: Livy
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.6.0
Reporter: runzhiwang


- start a repl session using the rsc *** FAILED *** (37 seconds, 374 
milliseconds)
 java.lang.RuntimeException: java.util.concurrent.TimeoutException
 at org.apache.livy.rsc.Utils.propagate(Utils.java:60)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:228)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
 Cause: java.util.concurrent.TimeoutException:
 at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:56)
 at org.apache.livy.rsc.RSCClient.stop(RSCClient.java:223)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply$mcV$sp(ReplDriverSuite.scala:64)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.apache.livy.repl.ReplDriverSuite$$anonfun$1.apply(ReplDriverSuite.scala:40)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-655) Travis failed on “testConnectToRunningContext”

2019-08-28 Thread runzhiwang (Jira)
runzhiwang created LIVY-655:
---

 Summary: Travis failed on “testConnectToRunningContext”
 Key: LIVY-655
 URL: https://issues.apache.org/jira/browse/LIVY-655
 Project: Livy
  Issue Type: Bug
  Components: RSC
Affects Versions: 0.6.0
Reporter: runzhiwang


refer to https://travis-ci.org/runzhiwang/incubator-livy/jobs/578129098



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-655) Travis failed on “testConnectToRunningContext”

2019-08-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-655:

Description: 
Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 70.658 sec <<< FAILURE! - in org.apache.livy.rsc.TestSparkClient
testConnectToRunningContext(org.apache.livy.rsc.TestSparkClient)  Time elapsed: 17.455 sec  <<< ERROR!
java.lang.RuntimeException: java.util.concurrent.TimeoutException
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590)
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
 at org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)
Caused by: java.util.concurrent.TimeoutException
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590)
 at org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575)
 at org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)

refer to [https://travis-ci.org/runzhiwang/incubator-livy/jobs/578129098]

  was:reference to 
https://travis-ci.org/runzhiwang/incubator-livy/jobs/578129098


> Travis failed on “testConnectToRunningContext”
> --
>
> Key: LIVY-655
> URL: https://issues.apache.org/jira/browse/LIVY-655
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 70.658 sec 
> <<< FAILURE! - in org.apache.livy.rsc.TestSparkClientTests run: 19, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 70.658 sec <<< FAILURE! - in 
> org.apache.livy.rsc.TestSparkClienttestConnectToRunningContext(org.apache.livy.rsc.TestSparkClient)
>   Time elapsed: 17.455 sec  <<< ERROR!java.lang.RuntimeException: 
> java.util.concurrent.TimeoutException at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590) at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575) at 
> org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)Caused
>  by: java.util.concurrent.TimeoutException at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590) at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575) at 
> org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)
>  
> reference to [https://travis-ci.org/runzhiwang/incubator-livy/jobs/578129098]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (LIVY-655) Travis failed on “testConnectToRunningContext”

2019-08-28 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918304#comment-16918304
 ] 

runzhiwang commented on LIVY-655:
-

I'm working on it.

> Travis failed on “testConnectToRunningContext”
> --
>
> Key: LIVY-655
> URL: https://issues.apache.org/jira/browse/LIVY-655
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 70.658 sec 
> <<< FAILURE! - in org.apache.livy.rsc.TestSparkClientTests run: 19, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 70.658 sec <<< FAILURE! - in 
> org.apache.livy.rsc.TestSparkClienttestConnectToRunningContext(org.apache.livy.rsc.TestSparkClient)
>   Time elapsed: 17.455 sec  <<< ERROR!java.lang.RuntimeException: 
> java.util.concurrent.TimeoutException at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590) at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575) at 
> org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)Caused
>  by: java.util.concurrent.TimeoutException at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:590) at 
> org.apache.livy.rsc.TestSparkClient.runTest(TestSparkClient.java:575) at 
> org.apache.livy.rsc.TestSparkClient.testConnectToRunningContext(TestSparkClient.java:338)
>  
> reference to [https://travis-ci.org/runzhiwang/incubator-livy/jobs/578129098]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (LIVY-636) Unable to create interactive session with additional JAR in spark.driver.extraClassPath

2019-08-29 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918305#comment-16918305
 ] 

runzhiwang commented on LIVY-636:
-

Hi [~ishitavirmani]

Do you have any updates?

> Unable to create interactive session with additional JAR in 
> spark.driver.extraClassPath
> ---
>
> Key: LIVY-636
> URL: https://issues.apache.org/jira/browse/LIVY-636
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Ishita Virmani
>Priority: Major
> Attachments: applicationmaster.log, container.log, stacktrace.txt, 
> test.png
>
>
> Command run: {{curl -H "Content-Type: application/json" -X POST -d 
> '{"kind":"pyspark","conf":{"spark.driver.extraClassPath":"/data/XXX-0.0.1-SNAPSHOT.jar"}}' 
> -i http:///session}}
> The above command fails to create a Spark session on YARN with a NullPointerException. 
> The stack trace for the same has been attached.
> The JAR file here is present on the local driver path. Also tried using an HDFS path 
> in the following manner: 
> {{hdfs://:/data/XXX-0.0.1-SNAPSHOT.jar}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-659) Travis failed on "can kill spark-submit while it's running"

2019-09-04 Thread runzhiwang (Jira)
runzhiwang created LIVY-659:
---

 Summary: Travis failed on "can kill spark-submit while it's 
running"
 Key: LIVY-659
 URL: https://issues.apache.org/jira/browse/LIVY-659
 Project: Livy
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.6.0
Reporter: runzhiwang


* can kill spark-submit while it's running *** FAILED *** (41 milliseconds)
 org.mockito.exceptions.verification.WantedButNotInvoked: Wanted but not 
invoked:
lineBufferedProcess.destroy();
-> at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15$$anonfun$apply$mcV$sp$16.apply$mcV$sp(SparkYarnAppSpec.scala:226)
Actually, there were zero interactions with this mock.
 at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15$$anonfun$apply$mcV$sp$16.apply$mcV$sp(SparkYarnAppSpec.scala:226)
 at 
org.apache.livy.utils.SparkYarnAppSpec.org$apache$livy$utils$SparkYarnAppSpec$$cleanupThread(SparkYarnAppSpec.scala:43)
 at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15.apply$mcV$sp(SparkYarnAppSpec.scala:224)
 at org.apache.livy.utils.Clock$.withSleepMethod(Clock.scala:31)
 at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply$mcV$sp(SparkYarnAppSpec.scala:201)
 at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply(SparkYarnAppSpec.scala:201)
 at 
org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply(SparkYarnAppSpec.scala:201)
 at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)

please refer to: https://travis-ci.org/captainzmc/incubator-livy/jobs/580596561
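For context, the failing check is a plain Mockito verification. A minimal sketch of the pattern, with the mock type taken from the log above but the surrounding wiring purely illustrative (not the exact SparkYarnAppSpec code), looks like this:

{code:scala}
import org.apache.livy.utils.LineBufferedProcess
import org.mockito.Mockito.{mock, verify}

// Sketch only: the spec kills the app and then expects the wrapped
// spark-submit process to be destroyed.
val lineBufferedProcess = mock(classOf[LineBufferedProcess])

// ... test code drives the kill path here ...

// "Wanted but not invoked" means this verification ran while the mock had seen
// zero interactions, i.e. the kill path never called destroy() in time.
verify(lineBufferedProcess).destroy()
{code}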



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (LIVY-659) Travis failed on "can kill spark-submit while it's running"

2019-09-04 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922336#comment-16922336
 ] 

runzhiwang commented on LIVY-659:
-

I'm working on it.

> Travis failed on "can kill spark-submit while it's running"
> ---
>
> Key: LIVY-659
> URL: https://issues.apache.org/jira/browse/LIVY-659
> Project: Livy
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> * can kill spark-submit while it's running *** FAILED *** (41 milliseconds)
>  org.mockito.exceptions.verification.WantedButNotInvoked: Wanted but not 
> invoked:
> lineBufferedProcess.destroy();
> -> at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15$$anonfun$apply$mcV$sp$16.apply$mcV$sp(SparkYarnAppSpec.scala:226)
> Actually, there were zero interactions with this mock.
>  at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15$$anonfun$apply$mcV$sp$16.apply$mcV$sp(SparkYarnAppSpec.scala:226)
>  at 
> org.apache.livy.utils.SparkYarnAppSpec.org$apache$livy$utils$SparkYarnAppSpec$$cleanupThread(SparkYarnAppSpec.scala:43)
>  at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13$$anonfun$apply$mcV$sp$15.apply$mcV$sp(SparkYarnAppSpec.scala:224)
>  at org.apache.livy.utils.Clock$.withSleepMethod(Clock.scala:31)
>  at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply$mcV$sp(SparkYarnAppSpec.scala:201)
>  at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply(SparkYarnAppSpec.scala:201)
>  at 
> org.apache.livy.utils.SparkYarnAppSpec$$anonfun$1$$anonfun$apply$mcV$sp$13.apply(SparkYarnAppSpec.scala:201)
>  at 
> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
> please reference to: 
> https://travis-ci.org/captainzmc/incubator-livy/jobs/580596561



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-665) Travis failed on "should not honor impersonation requests"

2019-09-09 Thread runzhiwang (Jira)
runzhiwang created LIVY-665:
---

 Summary: Travis failed on "should not honor impersonation requests"
 Key: LIVY-665
 URL: https://issues.apache.org/jira/browse/LIVY-665
 Project: Livy
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.6.0
Reporter: runzhiwang


should not honor impersonation requests *** FAILED *** (1 minute, 11 seconds)
 The code passed to eventually never returned normally. Attempted 544 times 
over 1.00074978245 minutes. Last failure message: "[starting]" was not equal to 
"[idle]". (JobApiSpec.scala:173)

 

please refer to: https://travis-ci.org/runzhiwang/incubator-livy/jobs/582531646
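For reference, the assertion that times out here is a ScalaTest eventually block of roughly the following shape; the state lookup and the exact spans are illustrative placeholders, the real code is at JobApiSpec.scala:173:

{code:scala}
import org.scalatest.concurrent.Eventually._
import org.scalatest.time.{Millis, Minutes, Span}

// Sketch only: poll the session state until it becomes "idle" or the budget
// runs out. In the flaky run above it kept observing "starting" for a minute.
def sessionState(): String = ???  // placeholder for the spec's real state lookup

eventually(timeout(Span(1, Minutes)), interval(Span(100, Millis))) {
  assert(sessionState() == "idle")
}
{code}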



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Issue Comment Deleted] (LIVY-661) POST /sessions API - Conf parameters

2019-09-17 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-661:

Comment: was deleted

(was: I'm working on it)

> POST /sessions API - Conf parameters
> 
>
> Key: LIVY-661
> URL: https://issues.apache.org/jira/browse/LIVY-661
> Project: Livy
>  Issue Type: Bug
>  Components: API
>Reporter: Arun Sethia
>Priority: Critical
>
> The Livy POST /sessions API allows passing conf (Spark configuration 
> properties). When we pass spark.driver.extraJavaOptions as conf, it overrides 
> the cluster default spark.driver.extraJavaOptions. Ideally it should be appended 
> to the default conf, as is done for other conf values.
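A minimal sketch of the requested behaviour (plain Scala over string maps; this is not Livy's actual session-creation code) appends the user value to the cluster default instead of replacing it:

{code:scala}
// Sketch only: merge a user-supplied spark.driver.extraJavaOptions with the
// cluster default instead of overriding it. Names and the Map-based handling
// are illustrative assumptions, not Livy's implementation.
val key = "spark.driver.extraJavaOptions"

def mergeConf(clusterDefaults: Map[String, String],
              userConf: Map[String, String]): Map[String, String] = {
  val appended = (clusterDefaults.get(key), userConf.get(key)) match {
    case (Some(defaults), Some(user)) => Some(s"$defaults $user")  // keep both, user options last
    case (defaults, user)             => user.orElse(defaults)
  }
  clusterDefaults ++ userConf ++ appended.map(key -> _)
}
{code}

With a merge like this, a request that only adds its own -D options still inherits the cluster's default JVM options.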



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) Support query a lot of data.

2019-09-18 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: Support query a lot of data.  (was: Collect a part of partition to 
the driver by batch to avoid OOM)

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-18 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang edited comment on LIVY-667 at 9/18/19 8:21 AM:
--

There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch. 
Cons: a little complicated to implement.


was (Author: runzhiwang):
There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch. 
Cons: a little complicated to implement.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-18 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang edited comment on LIVY-667 at 9/18/19 8:17 AM:
--

There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch. 
Cons: a little complicated to implement.


was (Author: runzhiwang):
There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at one time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
3.Cache rdd, and read each partition by batch. Cons: the shortage of memory and 
disk will cause the recompute of rdd, which maybe time-consuming
4.Save rdd to hdfs without repartition. and read each partition by batch. Cons: 
a little complicated to implement.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-18 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang edited comment on LIVY-667 at 9/18/19 8:16 AM:
--

There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at one time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
3.Cache rdd, and read each partition by batch. Cons: the shortage of memory and 
disk will cause the recompute of rdd, which maybe time-consuming
4.Save rdd to hdfs without repartition. and read each partition by batch. Cons: 
a little complicated to implement.


was (Author: runzhiwang):
I'm working on it

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-18 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang edited comment on LIVY-667 at 9/18/19 12:31 PM:
---

There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch. 


was (Author: runzhiwang):
There are several design to support query a lot of data.

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch. 
Cons: a little complicated to implement.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-667) Collecting a part of partition to the driver by batch

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: Collecting a part of partition to the driver by batch  (was: 
support query partition by batch)

> Collecting a part of partition to the driver by batch
> -
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) Collect a part of partition to the driver by batch to avoid OOM

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: Collect a part of partition to the driver by batch to avoid OOM  
(was: Collecting a part of partition to the driver by batch to avoid OOM)

> Collect a part of partition to the driver by batch to avoid OOM
> ---
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) support query big partition

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: support query big partition  (was: Don't support query big 
partition)

> support query big partition
> ---
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) support query big partition by batch

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: support query big partition by batch  (was: support query big 
partition)

> support query big partition by batch
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) support query partition by batch

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: support query partition by batch  (was: support query big 
partition by batch)

> support query partition by batch
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (LIVY-667) Collecting a part of partition to the driver by batch to avoid OOM

2019-09-15 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-667:

Summary: Collecting a part of partition to the driver by batch to avoid OOM 
 (was: Collecting a part of partition to the driver by batch)

> Collecting a part of partition to the driver by batch to avoid OOM
> --
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (LIVY-667) Don't support query big partition

2019-09-15 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang commented on LIVY-667:
-

I'm working on it

> Don't support query big partition
> -
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (LIVY-667) Don't support query big partition

2019-09-15 Thread runzhiwang (Jira)
runzhiwang created LIVY-667:
---

 Summary: Don't support query big partition
 Key: LIVY-667
 URL: https://issues.apache.org/jira/browse/LIVY-667
 Project: Livy
  Issue Type: Bug
  Components: Thriftserver
Affects Versions: 0.6.0
Reporter: runzhiwang


When livy.server.thrift.incrementalCollect is enabled, the thrift server uses toLocalIterator 
to load one partition at a time instead of the whole RDD, to avoid 
OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
still occurs.
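To illustrate the limitation, here is a small sketch against plain Spark (the SparkSession and table name are assumptions, not the thriftserver code):

{code:scala}
import org.apache.spark.sql.SparkSession

// Sketch only: `spark` and the table name are illustrative.
val spark = SparkSession.builder().getOrCreate()
val rdd = spark.sql("SELECT * FROM some_big_table").rdd

// Non-incremental path: the whole result is collected on the driver at once.
// val rows = rdd.collect()

// Incremental path (the idea behind livy.server.thrift.incrementalCollect):
// only one partition is resident on the driver at a time ...
rdd.toLocalIterator.foreach { row =>
  // stream the row out to the client here
}
// ... but each partition is still materialized as a whole, so one huge
// (skewed) partition can still exhaust driver memory.
{code}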



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (LIVY-661) POST /sessions API - Conf parameters

2019-09-15 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930210#comment-16930210
 ] 

runzhiwang commented on LIVY-661:
-

I'm working on it

> POST /sessions API - Conf parameters
> 
>
> Key: LIVY-661
> URL: https://issues.apache.org/jira/browse/LIVY-661
> Project: Livy
>  Issue Type: Bug
>  Components: API
>Reporter: Arun Sethia
>Priority: Critical
>
> The Livy POST /sessions API allows passing conf (Spark configuration 
> properties). When we pass spark.driver.extraJavaOptions as conf, it overrides 
> the cluster default spark.driver.extraJavaOptions. Ideally it should be appended 
> to the default conf, as is done for other conf values.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-19 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930215#comment-16930215
 ] 

runzhiwang edited comment on LIVY-667 at 9/20/19 3:13 AM:
--

Hi [~jerryshao], [~mgaido]. There are four designs to support querying a lot of 
data. What's your opinion?

1. Merge the result rdd into one partition, and save it as a single file in hdfs, then 
livy reads the file line by line directly. Cons: reading line by line is slow.
 2. Repartition the result into fixed-size partitions, and save them in hdfs, then livy 
reads with toLocalIterator, which loads one partition into memory at a time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3. Cache the rdd, and read each partition by batch. Cons: a shortage of memory 
or disk will cause the rdd to be recomputed, which may be time-consuming.
 4. Save the rdd to hdfs without repartitioning, and read each partition by batch; the 
code of "read each partition by batch" is just like the PR (a sketch follows below).


was (Author: runzhiwang):
hi,[~jerryshao], [~mgaido]. There are several design to support query a lot of 
data.  What's your opinion?

1.Merge the result rdd to one partition, and save as a single file in hdfs. And 
livy reads the file line by line directly.  Cons: it's slow to read line by 
line.
 2.Repartition each partition into fixed size, and save in hdfs. And livy reads 
by toLocalIterator which read one partition into memory at each time. Cons: 
there are a lot of files in hdfs if the size of each partition is too small.
 3.Cache rdd, and read each partition by batch. Cons: the shortage of memory 
and disk will cause the recompute of rdd, which maybe time-consuming
 4.Save rdd to hdfs without repartition. and read each partition by batch, the 
code of "read each partition by batch" is just like the PR.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When enable livy.server.thrift.incrementalCollect, thrift use toLocalIterator 
> to load one partition at each time instead of the whole rdd to avoid 
> OutOfMemory. However, if the largest partition is too big, the OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-721) Distributed Session ID Generation

2019-12-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16986605#comment-16986605
 ] 

runzhiwang commented on LIVY-721:
-

I am working on it.

> Distributed Session ID Generation
> -
>
> Key: LIVY-721
> URL: https://issues.apache.org/jira/browse/LIVY-721
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-722) Session Allocation with Consistent Hashing

2019-12-02 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16986606#comment-16986606
 ] 

runzhiwang commented on LIVY-722:
-

I'm working on it

> Session Allocation with Consistent Hashing
> --
>
> Key: LIVY-722
> URL: https://issues.apache.org/jira/browse/LIVY-722
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-727) Session state always be idle though the yarn application has been killed after restart livy.

2019-12-06 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990343#comment-16990343
 ] 

runzhiwang commented on LIVY-727:
-

I'm working on it.

> Session state always be idle though the yarn application has been killed 
> after  restart livy.
> -
>
> Key: LIVY-727
> URL: https://issues.apache.org/jira/browse/LIVY-727
> Project: Livy
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-727) Session state always be idle though the yarn application has been killed after restart livy.

2019-12-06 Thread runzhiwang (Jira)
runzhiwang created LIVY-727:
---

 Summary: Session state always be idle though the yarn application 
has been killed after  restart livy.
 Key: LIVY-727
 URL: https://issues.apache.org/jira/browse/LIVY-727
 Project: Livy
  Issue Type: Bug
Reporter: runzhiwang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-727) Session state always be idle though the yarn application has been killed after restart livy.

2019-12-06 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-727:

Description: 
# Set livy.server.recovery.mode=recovery, and create a session in yarn-cluster
 # Restart livy, then kill the yarn application.
 # The session state will always be idle and never change to killed or dead.

> Session state always be idle though the yarn application has been killed 
> after  restart livy.
> -
>
> Key: LIVY-727
> URL: https://issues.apache.org/jira/browse/LIVY-727
> Project: Livy
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
>
> # Set livy.server.recovery.mode=recovery, and create a session in yarn-cluster
>  # Restart livy,then kill the yarn application.
>  # The session state will always be idle and never change to killed or dead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-729:

Attachment: image-2019-12-18-08-56-46-925.png

> Livy should not recover the killed session
> --
>
> Key: LIVY-729
> URL: https://issues.apache.org/jira/browse/LIVY-729
> Project: Livy
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-18-08-56-46-925.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)
runzhiwang created LIVY-729:
---

 Summary: Livy should not recover the killed session
 Key: LIVY-729
 URL: https://issues.apache.org/jira/browse/LIVY-729
 Project: Livy
  Issue Type: Bug
  Components: Server
Affects Versions: 0.6.0
Reporter: runzhiwang
 Attachments: image-2019-12-18-08-56-46-925.png





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-729:

Description: 
How to reproduce the problem:

1.

!image-2019-12-18-08-56-46-925.png!

> Livy should not recover the killed session
> --
>
> Key: LIVY-729
> URL: https://issues.apache.org/jira/browse/LIVY-729
> Project: Livy
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-18-08-56-46-925.png
>
>
> How to reproduce the problem:
> 1.
> !image-2019-12-18-08-56-46-925.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16998710#comment-16998710
 ] 

runzhiwang commented on LIVY-729:
-

working on it

> Livy should not recover the killed session
> --
>
> Key: LIVY-729
> URL: https://issues.apache.org/jira/browse/LIVY-729
> Project: Livy
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-18-08-56-46-925.png
>
>
> Follows are steps to reproduce the problem:
>  # Set livy.server.recovery.mode=recovery, and create a session: session0 in 
> yarn-cluster
>  # kill the yarn application of the session
>  # restart livy
>  # livy try to recover session0, but application has been killed and driver 
> does not exist, so client can not connect to driver, and exception was thrown 
> as the image.
>  # If the ip:port of the driver was reused by session1, client of session0 
> will try to connect to driver of session1, then driver will throw exception: 
> Unexpected client ID.
>  # Both the exception will confused the user, and recover a lot of killed 
> sessions will delay the recover of alive session.
> !image-2019-12-18-08-56-46-925.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-729:

Description: 
Follows are steps to reproduce the problem:
 # Set livy.server.recovery.mode=recovery, and create a session: session0 in 
yarn-cluster
 # kill the yarn application of the session
 # restart livy
 # livy try to recover session0, but application has been killed and driver 
does not exist, so client can not connect to driver, and exception was thrown 
as the image.
 # If the ip:port of the driver was reused by session1, client of session0 will 
try to connect to driver of session1, then driver will throw exception: 
Unexpected client ID.
 # Both the exception will confused the user, and recover a lot of killed 
sessions will delay the recover of alive session.

!image-2019-12-18-08-56-46-925.png!

  was:
How to reproduce the problem:

1.

!image-2019-12-18-08-56-46-925.png!


> Livy should not recover the killed session
> --
>
> Key: LIVY-729
> URL: https://issues.apache.org/jira/browse/LIVY-729
> Project: Livy
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-18-08-56-46-925.png
>
>
> Follows are steps to reproduce the problem:
>  # Set livy.server.recovery.mode=recovery, and create a session: session0 in 
> yarn-cluster
>  # kill the yarn application of the session
>  # restart livy
>  # livy try to recover session0, but application has been killed and driver 
> does not exist, so client can not connect to driver, and exception was thrown 
> as the image.
>  # If the ip:port of the driver was reused by session1, client of session0 
> will try to connect to driver of session1, then driver will throw exception: 
> Unexpected client ID.
>  # Both the exception will confused the user, and recover a lot of killed 
> sessions will delay the recover of alive session.
> !image-2019-12-18-08-56-46-925.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-729) Livy should not recover the killed session

2019-12-17 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-729:

Description: 
The following steps reproduce the problem:
 # Set livy.server.recovery.mode=recovery, and create a session (session0) in 
yarn-cluster
 # Kill the yarn application of the session.
 # Restart livy.
 # Livy tries to recover session0, but the application has been killed and the driver 
does not exist, so the client cannot connect to the driver, and an exception is thrown 
as shown in the image.
 # If the ip:port of the driver is reused by session1, the client of session0 will 
try to connect to the driver of session1, and the driver will throw the exception: 
Unexpected client ID.
 # Both the exceptions thrown by livy and the driver will confuse the user, and 
recovering a lot of killed sessions will delay the recovery of alive sessions.

!image-2019-12-18-08-56-46-925.png!

  was:
Follows are steps to reproduce the problem:
 # Set livy.server.recovery.mode=recovery, and create a session: session0 in 
yarn-cluster
 # kill the yarn application of the session
 # restart livy
 # livy try to recover session0, but application has been killed and driver 
does not exist, so client can not connect to driver, and exception was thrown 
as the image.
 # If the ip:port of the driver was reused by session1, client of session0 will 
try to connect to driver of session1, then driver will throw exception: 
Unexpected client ID.
 # Both the exception will confused the user, and recover a lot of killed 
sessions will delay the recover of alive session.

!image-2019-12-18-08-56-46-925.png!


> Livy should not recover the killed session
> --
>
> Key: LIVY-729
> URL: https://issues.apache.org/jira/browse/LIVY-729
> Project: Livy
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-18-08-56-46-925.png
>
>
> The following steps reproduce the problem:
>  # Set livy.server.recovery.mode=recovery, and create a session (session0) in 
> yarn-cluster mode.
>  # Kill the YARN application of the session.
>  # Restart Livy.
>  # Livy tries to recover session0, but the application has been killed and the 
> driver no longer exists, so the client cannot connect to the driver and the 
> exception shown in the image is thrown.
>  # If the ip:port of the driver is reused by session1, the client of session0 
> will try to connect to the driver of session1, and that driver will throw the 
> exception: Unexpected client ID.
>  # Both the exceptions thrown by Livy and by the driver confuse the user, and 
> recovering many killed sessions delays the recovery of alive sessions.
> !image-2019-12-18-08-56-46-925.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-727) Session state always be idle though the yarn application has been killed after restart livy.

2019-12-07 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-727:

Description: 
# Set livy.server.recovery.mode=recovery, and create a session in yarn-cluster 
mode.
 # Restart Livy, then kill the YARN application of the session.
 # The session state stays idle forever and never changes to killed or dead.

  was:
# Set livy.server.recovery.mode=recovery, and create a session in yarn-cluster
 # Restart livy,then kill the yarn application.
 # The session state will always be idle and never change to killed or dead.


> Session state always be idle though the yarn application has been killed 
> after  restart livy.
> -
>
> Key: LIVY-727
> URL: https://issues.apache.org/jira/browse/LIVY-727
> Project: Livy
>  Issue Type: Bug
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Set livy.server.recovery.mode=recovery, and create a session in yarn-cluster 
> mode.
>  # Restart Livy, then kill the YARN application of the session.
>  # The session state stays idle forever and never changes to killed or dead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-720) NoSuchElementException caused when reading from hdfs submitted via livy programmatic api

2019-12-03 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987467#comment-16987467
 ] 

runzhiwang commented on LIVY-720:
-

I will work on it soon.

> NoSuchElementException caused when reading from hdfs  submitted via livy 
> programmatic api
> -
>
> Key: LIVY-720
> URL: https://issues.apache.org/jira/browse/LIVY-720
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
> Environment: Using a docker container on windows 10: 
> https://hub.docker.com/r/cheathwood/hadoop-spark-livy
>Reporter: Stephen Jenkins
>Priority: Blocker
>
> Hi,
>  
> I've been using the Livy programmatic API to submit Spark jobs written in 
> Scala and I've run into a strange issue. I'm using case classes to wrap the 
> parameters I want to send over to Spark, then within the job I manipulate 
> them to be used for different parts of the job. However, it seems that whenever 
> I try to read and collect data from HDFS I get the following error:
> {code:java}
> java.util.NoSuchElementException: head of empty list
>   at scala.collection.immutable.Nil$.head(List.scala:420)
>   at scala.collection.immutable.Nil$.head(List.scala:417)
>   at scala.collection.immutable.List.map(List.scala:277)
>   at 
> scala.reflect.internal.Symbols$Symbol.parentSymbols(Symbols.scala:2117)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:301)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:341)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply$mcV$sp(SymbolLoaders.scala:74)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.internal.SymbolTable.slowButSafeEnteringPhaseNotLaterThan(SymbolTable.scala:263)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType.complete(SymbolLoaders.scala:71)
>   at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1514)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.scala$reflect$runtime$SynchronizedSymbols$SynchronizedSymbol$$super$info(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at scala.reflect.runtime.Gil$class.gilSynchronized(Gil.scala:19)
>   at 
> scala.reflect.runtime.JavaUniverse.gilSynchronized(JavaUniverse.scala:16)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:123)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.info(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.info(SynchronizedSymbols.scala:174)
>   at scala.reflect.internal.Types$TypeRef.thisInfo(Types.scala:2194)
>   at scala.reflect.internal.Types$TypeRef.baseClasses(Types.scala:2199)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMemberBase.(FindMembers.scala:17)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMember.(FindMembers.scala:219)
>   at 
> scala.reflect.internal.Types$Type.scala$reflect$internal$Types$Type$$findMemberInternal$1(Types.scala:1014)
>   at scala.reflect.internal.Types$Type.findMember(Types.scala:1016)
>   at scala.reflect.internal.Types$Type.memberBasedOnName(Types.scala:631)
>   at scala.reflect.internal.Types$Type.member(Types.scala:600)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.staticPackage(Mirrors.scala:204)
>   at 
> scala.reflect.runtime.JavaMirrors$JavaMirror.staticPackage(JavaMirrors.scala:82)
>   at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:263)
>   at 
> scala.reflect.runtime.JavaMirrors$class.scala$reflect$runtime$JavaMirrors$$createMirror(JavaMirrors.scala:32)
>   at 
> scala.reflect.runtime.JavaMirrors$$anonfun$runtimeMirror$1.apply(JavaMirrors.scala:49)
>   at 
> scala.reflect.runtime.JavaMirrors$$anonfun$runtimeMirror$1.apply(JavaMirrors.scala:47)
>   at 

[jira] [Updated] (LIVY-699) [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and date

2019-10-25 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Summary: [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, 
timestamp and date  (was: Support DateType: decimal, timestamp, date)

> [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and 
> date
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-699) [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and date

2019-10-25 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Description: 
The following steps reproduce the problem:
 # create table test(id decimal).
 # resultSet.getMetaData().getColumnTypeName(1) will return string rather than 
decimal, which may mislead the user and cause errors.

Additionally, Spark Thrift Server returns decimal instead of string in the same 
case, so Livy should also return decimal instead of string. The same applies to 
timestamp and date.

> [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and 
> date
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following steps reproduce the problem:
>  # create table test(id decimal).
>  # resultSet.getMetaData().getColumnTypeName(1) will return string rather 
> than decimal, which may mislead the user and cause errors.
> Additionally, Spark Thrift Server returns decimal instead of string in the 
> same case, so Livy should also return decimal instead of string. The same 
> applies to timestamp and date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-699) [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and date

2019-10-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Description: 
The following steps reproduce the problem:
 # {{create table test(id decimal)}}.
 # {{resultSet.getMetaData().getColumnTypeName(1)}} will return string rather 
than decimal.
 # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
Illegal conversion}}.

Additionally, Spark Thrift Server returns decimal instead of string in the same 
case, so Livy should also return decimal instead of string. The same applies to 
timestamp and date.

  was:
Follows are steps to reproduce the problem:
 # create table test(id decimal).
 # resultSet.getMetaData().getColumnTypeName(1) will return string rather than 
decimal, which maybe mislead user and cause error.

Additionally, SparkThrift return decimal instead of string in the same case, so 
it is necessary to return decimal instead of string in livy. The same to 
timestamp and date.


> [LIVY-699][Thrift] Fix getColumnTypeName cannot return decimal, timestamp and 
> date
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following steps reproduce the problem:
>  # {{create table test(id decimal)}}.
>  # {{resultSet.getMetaData().getColumnTypeName(1)}} will return string rather 
> than decimal.
>  # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
> Illegal conversion}}.
> Additionally, Spark Thrift Server returns decimal instead of string in the 
> same case, so Livy should also return decimal instead of string. The same 
> applies to timestamp and date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-699) Support DateType: decimal, timestamp, date

2019-10-21 Thread runzhiwang (Jira)
runzhiwang created LIVY-699:
---

 Summary: Support DateType: decimal, timestamp, date
 Key: LIVY-699
 URL: https://issues.apache.org/jira/browse/LIVY-699
 Project: Livy
  Issue Type: Bug
Affects Versions: 0.6.0
Reporter: runzhiwang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-699) [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: Illegal conversion

2019-10-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Summary: [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw 
java.sql.SQLException: Illegal conversion  (was: [LIVY-699][Thrift] Fix 
getColumnTypeName cannot return decimal, timestamp and date)

> [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: 
> Illegal conversion
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following steps reproduce the problem:
>  # {{create table test(id decimal)}}.
>  # {{resultSet.getMetaData().getColumnTypeName(1)}} will return string rather 
> than decimal.
>  # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
> Illegal conversion}}.
> Additionally, Spark Thrift Server returns decimal instead of string in the 
> same case, so Livy should also return decimal instead of string. The same 
> applies to timestamp and date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-699) [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: Illegal conversion

2019-10-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Description: 
[LIVY-699][THRIFT] Fix resultSet.getBigDecimal throwing java.sql.SQLException: 
Illegal conversion.

The following steps reproduce the problem:
 # {{create table test(id decimal)}}.
 # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
Illegal conversion}}. The reason is that 
{{getSchema().getColumnDescriptorAt(columnIndex - 1).getType();}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L415]
 returns string, so the value cannot pass the check {{val instanceof BigDecimal}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L133],
 and {{java.sql.SQLException: Illegal conversion}} is thrown at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L137]

Additionally, Spark Thrift Server returns decimal instead of string in the same 
case, so Livy should also return decimal instead of string. The same applies to 
timestamp and date.
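
For reference, a minimal JDBC sketch of the reproduction above; a running Livy 
Thrift server and the Hive JDBC driver on the classpath are assumed, and the 
connection string and credentials below are placeholders:

{code:scala}
import java.sql.DriverManager

// Placeholder URL; point it at a running Livy Thrift server.
val conn = DriverManager.getConnection("jdbc:hive2://livy-host:10090/default", "user", "")
val stmt = conn.createStatement()
stmt.execute("create table test(id decimal)")
stmt.execute("insert into test values (1)")

val rs = stmt.executeQuery("select id from test")
// Expected "decimal"; before the fix the metadata reports "string".
println(rs.getMetaData.getColumnTypeName(1))
while (rs.next()) {
  // Throws java.sql.SQLException: Illegal conversion while the column type
  // is reported as string instead of decimal.
  println(rs.getBigDecimal(1))
}
rs.close(); stmt.close(); conn.close()
{code}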

  was:
[LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: 
Illegal conversion.

Follows are steps to reproduce the problem:
 # {{create table test(id decimal)}}.
 # Then {{resultSet.getBigDecimal(1)}} will throw:{{ java.sql.SQLException: 
Illegal conversion}}. The reason is 
{{getSchema().getColumnDescriptorAt(columnIndex - 1).getType();}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L415]
 return string, so cannot pass the check {{val instanceof BigDecimal }}at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L133],
 so throw {{java.sql.SQLException: Illegal conversion}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L137]

Additionally, SparkThrift return decimal instead of string in the same case, so 
it is necessary to return decimal instead of string in livy. The same to 
timestamp and date.


> [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: 
> Illegal conversion
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throwing java.sql.SQLException: 
> Illegal conversion.
> The following steps reproduce the problem:
>  # {{create table test(id decimal)}}.
>  # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
> Illegal conversion}}. The reason is that 
> {{getSchema().getColumnDescriptorAt(columnIndex - 1).getType();}} at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L415]
>  returns string, so the value cannot pass the check {{val instanceof BigDecimal}} at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L133],
>  and {{java.sql.SQLException: Illegal conversion}} is thrown at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L137]
> Additionally, Spark Thrift Server returns decimal instead of string in the 
> same case, so Livy should also return decimal instead of string. The same 
> applies to timestamp and date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-699) [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: Illegal conversion

2019-10-28 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-699:

Description: 
[LIVY-699][THRIFT] Fix resultSet.getBigDecimal throwing java.sql.SQLException: 
Illegal conversion.

The following steps reproduce the problem:
 # {{create table test(id decimal)}}.
 # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
Illegal conversion}}. The reason is that 
{{getSchema().getColumnDescriptorAt(columnIndex - 1).getType();}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L415]
 returns string, so the value cannot pass the check {{val instanceof BigDecimal}} at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L133],
 and {{java.sql.SQLException: Illegal conversion}} is thrown at 
[https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L137]

Additionally, Spark Thrift Server returns decimal instead of string in the same 
case, so Livy should also return decimal instead of string. The same applies to 
timestamp and date.

  was:
Follows are steps to reproduce the problem:
 # {{create table test(id decimal)}}.
 # {{resultSet.getMetaData().getColumnTypeName(1)}} will return string rather 
than decimal.
 # Then {{resultSet.getBigDecimal(1)}} will throw:{{ java.sql.SQLException: 
Illegal conversion}}

Additionally, SparkThrift return decimal instead of string in the same case, so 
it is necessary to return decimal instead of string in livy. The same to 
timestamp and date.


> [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throw java.sql.SQLException: 
> Illegal conversion
> --
>
> Key: LIVY-699
> URL: https://issues.apache.org/jira/browse/LIVY-699
> Project: Livy
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [LIVY-699][THRIFT] Fix resultSet.getBigDecimal throwing java.sql.SQLException: 
> Illegal conversion.
> The following steps reproduce the problem:
>  # {{create table test(id decimal)}}.
>  # Then {{resultSet.getBigDecimal(1)}} will throw {{java.sql.SQLException: 
> Illegal conversion}}. The reason is that 
> {{getSchema().getColumnDescriptorAt(columnIndex - 1).getType();}} at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L415]
>  returns string, so the value cannot pass the check {{val instanceof BigDecimal}} at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L133],
>  and {{java.sql.SQLException: Illegal conversion}} is thrown at 
> [https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java#L137]
> Additionally, Spark Thrift Server returns decimal instead of string in the 
> same case, so Livy should also return decimal instead of string. The same 
> applies to timestamp and date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-714:

Description: (was: 
!file:///C:/Users/RUNZHI~1/AppData/Local/Temp/%E4%BC%81%E4%B8%9A%E5%BE%AE%E4%BF%A1%E6%88%AA%E5%9B%BE_157421453822.png!)

> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: bug.png, image-2019-11-20-09-50-52-316.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-714:

Attachment: image-2019-11-20-09-50-52-316.png

> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: bug.png, image-2019-11-20-09-50-52-316.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-714:

Description: !image-2019-11-20-09-50-52-316.png!

> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: bug.png, image-2019-11-20-09-50-52-316.png
>
>
> !image-2019-11-20-09-50-52-316.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-714:

Attachment: bug.png

> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: bug.png, image-2019-11-20-09-50-52-316.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)
runzhiwang created LIVY-714:
---

 Summary: Cannot remove the app in leakedAppTags when timeout
 Key: LIVY-714
 URL: https://issues.apache.org/jira/browse/LIVY-714
 Project: Livy
  Issue Type: New Feature
Reporter: runzhiwang


!file:///C:/Users/RUNZHI~1/AppData/Local/Temp/%E4%BC%81%E4%B8%9A%E5%BE%AE%E4%BF%A1%E6%88%AA%E5%9B%BE_157421453822.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-714:

Description: 
# var isRemoved = false should be inside while(iter.hasNext). Otherwise, if 
there are two apps, where the first app is removed and the second app times out 
in the same loop, then after the first app is removed isRemoved becomes true, 
so the second app cannot pass if(!isRemoved) and it will only be deleted in the 
next loop.
 # entry.getValue - now is negative and thus never greater than 
sessionLeakageCheckTimeout, so the timeout check never fires (see the sketch 
below the screenshot).

!image-2019-11-20-09-50-52-316.png!
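
To make the two points concrete, here is a hedged sketch of a corrected GC 
loop; it is not the actual Livy code, and the map, the timeout value and the 
killByTag callback are stand-ins for the real fields:

{code:scala}
import java.util.concurrent.ConcurrentHashMap

// Stand-in for the real field: appTag -> time the leak was recorded (ms).
val leakedAppTags = new ConcurrentHashMap[String, Long]()
val sessionLeakageCheckTimeout = 600000L  // assumed value, in ms

def gcLeakedApps(killByTag: String => Boolean): Unit = {
  val iter = leakedAppTags.entrySet().iterator()
  val now = System.currentTimeMillis()
  while (iter.hasNext) {
    // Reset per entry, so removing the first app no longer hides a
    // timed-out second app until the next loop.
    var isRemoved = false
    val entry = iter.next()
    if (killByTag(entry.getKey)) {
      iter.remove()
      isRemoved = true
    }
    if (!isRemoved) {
      // Compare elapsed time (now - recorded time), not the reverse,
      // so the timeout branch can actually fire.
      if (now - entry.getValue > sessionLeakageCheckTimeout) {
        iter.remove()
      }
    }
  }
}
{code}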

  was:!image-2019-11-20-09-50-52-316.png!


> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-11-20-09-50-52-316.png
>
>
> # var isRemoved = false should be inside while(iter.hasNext). Otherwise, if 
> there are two apps, where the first app is removed and the second app times 
> out in the same loop, then after the first app is removed isRemoved becomes 
> true, so the second app cannot pass if(!isRemoved) and it will only be deleted 
> in the next loop.
>  # entry.getValue - now is negative and thus never greater than 
> sessionLeakageCheckTimeout, so the timeout check never fires.
> !image-2019-11-20-09-50-52-316.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-714) Cannot remove the app in leakedAppTags when timeout

2019-11-19 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16977987#comment-16977987
 ] 

runzhiwang commented on LIVY-714:
-

I'm working on it.

> Cannot remove the app in leakedAppTags when timeout
> ---
>
> Key: LIVY-714
> URL: https://issues.apache.org/jira/browse/LIVY-714
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-11-20-09-50-52-316.png
>
>
> # var isRemoved = false should be inside while(iter.hasNext). Otherwise, if 
> there are two apps, where the first app is removed and the second app times 
> out in the same loop, then after the first app is removed isRemoved becomes 
> true, so the second app cannot pass if(!isRemoved) and it will only be deleted 
> in the next loop.
>  # entry.getValue - now is negative and thus never greater than 
> sessionLeakageCheckTimeout, so the timeout check never fires.
> !image-2019-11-20-09-50-52-316.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-711) jdk8 cause travis failed

2019-11-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-711:

Summary: jdk8 cause travis failed   (was: travis failed)

> jdk8 cause travis failed 
> -
>
> Key: LIVY-711
> URL: https://issues.apache.org/jira/browse/LIVY-711
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-11-12-10-16-27-108.png
>
>
> !image-2019-11-12-10-16-27-108.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-711) travis failed

2019-11-11 Thread runzhiwang (Jira)
runzhiwang created LIVY-711:
---

 Summary: travis failed
 Key: LIVY-711
 URL: https://issues.apache.org/jira/browse/LIVY-711
 Project: Livy
  Issue Type: New Feature
Reporter: runzhiwang
 Attachments: image-2019-11-12-10-16-27-108.png

!image-2019-11-12-10-16-27-108.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-711) Travis fails to build on Ubuntu16.04

2019-11-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-711:

Attachment: image-2019-11-12-14-25-37-189.png

> Travis fails to build on Ubuntu16.04
> 
>
> Key: LIVY-711
> URL: https://issues.apache.org/jira/browse/LIVY-711
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-11-12-10-16-27-108.png, 
> image-2019-11-12-14-25-37-189.png
>
>
> !image-2019-11-12-10-16-27-108.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-711) Travis fails to build on Ubuntu16.04

2019-11-11 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-711:

Summary: Travis fails to build on Ubuntu16.04  (was: Fails to build on 
Travis CI (Xenial))

> Travis fails to build on Ubuntu16.04
> 
>
> Key: LIVY-711
> URL: https://issues.apache.org/jira/browse/LIVY-711
> Project: Livy
>  Issue Type: New Feature
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-11-12-10-16-27-108.png
>
>
> !image-2019-11-12-10-16-27-108.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-23 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935497#comment-16935497
 ] 

runzhiwang edited comment on LIVY-667 at 9/23/19 6:26 AM:
--

[~mgaido] Because of the limited memory and huge size of user data, increasing 
memory cannot solve this problem. Also, a lot of drivers with large memory will 
exhaust the cluster memory very soon.


was (Author: runzhiwang):
[~mgaido] Because of the limited memory and huge size of user data, increasing 
memory cannot solve this problem. Otherwise, a lot of drivers with large memory 
will exhaust the cluster memory very soon.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When livy.server.thrift.incrementalCollect is enabled, the thrift server uses 
> toLocalIterator to load one partition at a time instead of the whole RDD, to 
> avoid OutOfMemory. However, if the largest partition is too big, OutOfMemory 
> still occurs.
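
For illustration only, a small Spark sketch of the difference the description 
refers to, assuming an existing SparkSession and a hypothetical large table:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
val df = spark.sql("select * from some_big_table")  // hypothetical table

// Without incrementalCollect: the whole result is materialized in the driver.
val all = df.collect()

// With incrementalCollect: one partition at a time, but the driver still has
// to hold the largest single partition in memory, so OutOfMemory can still occur.
val it = df.toLocalIterator()
while (it.hasNext) {
  val row = it.next()
  // stream the row to the client instead of buffering everything
}
{code}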



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (LIVY-667) Support query a lot of data.

2019-09-22 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935497#comment-16935497
 ] 

runzhiwang edited comment on LIVY-667 at 9/23/19 1:21 AM:
--

[~mgaido] Because of the limited memory and huge size of user data, increasing 
memory cannot solve this problem. Otherwise, a lot of drivers with large memory 
will exhaust the cluster memory very soon.


was (Author: runzhiwang):
[~mgaido] Because of the limited memory and huge size of user data, increasing 
memory cannot solve this problem. Otherwise, a lot of drivers with large memory 
will exist the cluster memory very soon.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When livy.server.thrift.incrementalCollect is enabled, the thrift server uses 
> toLocalIterator to load one partition at a time instead of the whole RDD, to 
> avoid OutOfMemory. However, if the largest partition is too big, OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-667) Support query a lot of data.

2019-09-22 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935497#comment-16935497
 ] 

runzhiwang commented on LIVY-667:
-

[~mgaido] Because of the limited memory and huge size of user data, increasing 
memory cannot solve this problem. Otherwise, a lot of drivers with large memory 
will exist the cluster memory very soon.

> Support query a lot of data.
> 
>
> Key: LIVY-667
> URL: https://issues.apache.org/jira/browse/LIVY-667
> Project: Livy
>  Issue Type: Bug
>  Components: Thriftserver
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When livy.server.thrift.incrementalCollect is enabled, the thrift server uses 
> toLocalIterator to load one partition at a time instead of the whole RDD, to 
> avoid OutOfMemory. However, if the largest partition is too big, OutOfMemory 
> still occurs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-697) rsc client cannot resolve the hostname of driver in yarn-cluster mode

2019-10-12 Thread runzhiwang (Jira)
runzhiwang created LIVY-697:
---

 Summary: rsc client cannot resolve the hostname of driver in 
yarn-cluster mode
 Key: LIVY-697
 URL: https://issues.apache.org/jira/browse/LIVY-697
 Project: Livy
  Issue Type: Bug
  Components: RSC
Affects Versions: 0.6.0
Reporter: runzhiwang
 Attachments: image-2019-10-13-12-44-41-861.png

!image-2019-10-13-12-44-41-861.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-697) Rsc client cannot resolve the hostname of driver in yarn-cluster mode

2019-10-12 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950215#comment-16950215
 ] 

runzhiwang commented on LIVY-697:
-

working on it

> Rsc client cannot resolve the hostname of driver in yarn-cluster mode
> -
>
> Key: LIVY-697
> URL: https://issues.apache.org/jira/browse/LIVY-697
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-10-13-12-44-41-861.png
>
>
> !image-2019-10-13-12-44-41-861.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-697) Rsc client cannot resolve the hostname of driver in yarn-cluster mode

2019-10-12 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-697:

Summary: Rsc client cannot resolve the hostname of driver in yarn-cluster 
mode  (was: rsc client cannot resolve the hostname of driver in yarn-cluster 
mode)

> Rsc client cannot resolve the hostname of driver in yarn-cluster mode
> -
>
> Key: LIVY-697
> URL: https://issues.apache.org/jira/browse/LIVY-697
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-10-13-12-44-41-861.png
>
>
> !image-2019-10-13-12-44-41-861.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-732) A Common Zookeeper Wrapper Utility

2019-12-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-732:

Description: 
Currently, the ZooKeeper utilities are mixed into ZooKeeperStateStore. To use 
them, a ZooKeeperStateStore has to be created, which looks weird.

This Jira aims to achieve two targets:

1. Extract the ZooKeeper utilities from ZooKeeperStateStore so that they can 
support features such as distributed locks, service discovery, and so on.

2. ZooKeeperManager, which contains the ZooKeeper utilities, should be a 
single instance.
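
As a rough illustration of the target shape only (not the actual Livy classes), 
a standalone singleton wrapper over Curator might look like the sketch below; 
the ZooKeeper address and paths are placeholders:

{code:scala}
import org.apache.curator.framework.{CuratorFramework, CuratorFrameworkFactory}
import org.apache.curator.framework.recipes.locks.InterProcessMutex
import org.apache.curator.retry.ExponentialBackoffRetry

object ZooKeeperManager {
  // One shared client, independent of any state store.
  private lazy val client: CuratorFramework = {
    val c = CuratorFrameworkFactory.newClient("zk-host:2181", new ExponentialBackoffRetry(1000, 3))
    c.start()
    c
  }

  def get(path: String): Option[Array[Byte]] =
    Option(client.checkExists().forPath(path)).map(_ => client.getData.forPath(path))

  def set(path: String, data: Array[Byte]): Unit =
    if (client.checkExists().forPath(path) == null) {
      client.create().creatingParentsIfNeeded().forPath(path, data)
    } else {
      client.setData().forPath(path, data)
    }

  // Building blocks such as a distributed lock are exposed directly.
  def lock(path: String): InterProcessMutex = new InterProcessMutex(client, path)
}
{code}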

  was:Currently, the utility of zookeeper mixed with ZooKeeperStateStore, and 
it's weird.


> A Common Zookeeper Wrapper Utility 
> ---
>
> Key: LIVY-732
> URL: https://issues.apache.org/jira/browse/LIVY-732
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> Currently, the ZooKeeper utilities are mixed into ZooKeeperStateStore. To use 
> them, a ZooKeeperStateStore has to be created, which looks weird.
> This Jira aims to achieve two targets:
> 1. Extract the ZooKeeper utilities from ZooKeeperStateStore so that they can 
> support features such as distributed locks, service discovery, and so on.
> 2. ZooKeeperManager, which contains the ZooKeeper utilities, should be a 
> single instance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-735) Fix RPC Channel Closed When Multi Clients Connect to One Driver

2019-12-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-735:

Description: 
Currently, the driver tries to support communication with multiple clients by 
registering each client at 
https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java#L220.

But in fact, if multiple clients connect to one driver, the RPC channel will 
close. The reasons are as follows.

1. In every communication, the client sends two packets to the driver: a 
header\{type, id} and a payload, at 
https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L144.

2. If client1 sends header1, payload1 and client2 sends header2, payload2 at 
the same time, the driver may receive the packets in the order: header1, 
header2, payload1, payload2.

3. When the driver receives header1, it assigns lastHeader at 
https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L73.

4. When the driver then receives header2, it processes it as a payload at 
https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L78,
 which causes an exception and closes the RPC channel.

In the multi-active HA mode (design doc: 
https://docs.google.com/document/d/1bD3qYZpw14_NuCcSGUOfqQ0pqvSbCQsOLFuZp26Ohjc/edit?usp=sharing),
 sessions are allocated among servers by consistent hashing. If a new Livy 
server joins, some sessions will be migrated from the old server to the new 
one. If the session client in the new server connects to the driver before the 
session client in the old server is stopped, two session clients will be 
connected to the same driver and the RPC channel will close. In this case, it 
is hard to ensure that only one client connects to one driver at any time, so 
it is better to support multiple clients connecting to one driver, which has no 
side effects.

How to fix (see the sketch after this list):
1. Move the code that processes client messages from `RpcDispatcher` to each 
`Rpc`.
2. Each `Rpc` registers itself in `channelRpc` in `RpcDispatcher`.
3. `RpcDispatcher` dispatches each message to the right `Rpc` according to 
`ctx.channel()`.
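
A hedged sketch of the dispatch idea only (the class names below are 
illustrative, not the actual Livy classes): per-connection state is keyed by 
the Netty channel, so a header from one client can never be consumed as the 
payload of another:

{code:scala}
import io.netty.channel.{Channel, ChannelHandlerContext}
import scala.collection.concurrent.TrieMap

class RpcState {
  private var lastHeader: Option[AnyRef] = None

  def onMessage(msg: AnyRef): Unit = lastHeader match {
    case None         => lastHeader = Some(msg)                  // first frame: header
    case Some(header) => lastHeader = None; handle(header, msg)  // second frame: payload
  }

  private def handle(header: AnyRef, payload: AnyRef): Unit = {
    // decode the call described by `header` and apply `payload` to it
  }
}

object ChannelDispatcher {
  private val channelRpc = TrieMap.empty[Channel, RpcState]

  def register(ch: Channel): Unit = {
    channelRpc.putIfAbsent(ch, new RpcState)
  }

  def channelRead(ctx: ChannelHandlerContext, msg: AnyRef): Unit =
    channelRpc.get(ctx.channel()).foreach(_.onMessage(msg))
}
{code}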

> Fix RPC Channel Closed When Multi Clients Connect to One Driver 
> 
>
> Key: LIVY-735
> URL: https://issues.apache.org/jira/browse/LIVY-735
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the driver tries to support communication with multiple clients by 
> registering each client at 
> https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/driver/RSCDriver.java#L220.
> But in fact, if multiple clients connect to one driver, the RPC channel will 
> close. The reasons are as follows.
> 1. In every communication, the client sends two packets to the driver: a 
> header\{type, id} and a payload, at 
> https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L144.
> 2. If client1 sends header1, payload1 and client2 sends header2, payload2 at 
> the same time, the driver may receive the packets in the order: header1, 
> header2, payload1, payload2.
> 3. When the driver receives header1, it assigns lastHeader at 
> https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L73.
> 4. When the driver then receives header2, it processes it as a payload at 
> https://github.com/apache/incubator-livy/blob/master/rsc/src/main/java/org/apache/livy/rsc/rpc/RpcDispatcher.java#L78,
>  which causes an exception and closes the RPC channel.
> In the multi-active HA mode (design doc: 
> https://docs.google.com/document/d/1bD3qYZpw14_NuCcSGUOfqQ0pqvSbCQsOLFuZp26Ohjc/edit?usp=sharing),
>  sessions are allocated among servers by consistent hashing. If a new Livy 
> server joins, some sessions will be migrated from the old server to the new 
> one. If the session client in the new server connects to the driver before 
> the session client in the old server is stopped, two session clients will be 
> connected to the same driver and the RPC channel will close. In this case, it 
> is hard to ensure that only one client connects to one driver at any time, so 
> it is better to support multiple clients connecting to one driver, which has 
> no side effects.
> How to fix:
> 1. Move the code that processes client messages from `RpcDispatcher` to each 
> `Rpc`.
> 2. Each `Rpc` registers itself in `channelRpc` in `RpcDispatcher`.
> 3. `RpcDispatcher` dispatches each message to the right `Rpc` according to 
> `ctx.channel()`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-732) A Common Zookeeper Wrapper Utility

2019-12-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-732:

Description: 
Currently, the ZooKeeper utilities are mixed into ZooKeeperStateStore. To use 
them, an instance of ZooKeeperStateStore has to be created, which looks weird.

This Jira aims to achieve two targets:

1. Extract the ZooKeeper utilities from ZooKeeperStateStore so that they can 
support features such as distributed locks, service discovery, and so on.

2. ZooKeeperManager, which contains the ZooKeeper utilities, should be a 
single instance.

  was:
Currently, the utilities of zookeeper mixed with ZooKeeperStateStore. To use 
the utility of zookeeper, I have to create a ZooKeeperStateStore which looks 
weird.

This Jira aims to achieve two targets:

1.  Extract the utilities of zookeeper from ZooKeeperStateStore to support such 
as distributed lock, service discovery and so on.

2.  ZooKeeperManager which contains the utilities of zookeeper should be a 
single instance.


> A Common Zookeeper Wrapper Utility 
> ---
>
> Key: LIVY-732
> URL: https://issues.apache.org/jira/browse/LIVY-732
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> Currently, the ZooKeeper utilities are mixed into ZooKeeperStateStore. To use 
> them, an instance of ZooKeeperStateStore has to be created, which looks 
> weird.
> This Jira aims to achieve two targets:
> 1. Extract the ZooKeeper utilities from ZooKeeperStateStore so that they can 
> support features such as distributed locks, service discovery, and so on.
> 2. ZooKeeperManager, which contains the ZooKeeper utilities, should be a 
> single instance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (LIVY-730) Travis failed because of RSCClient instance stopped

2019-12-18 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999628#comment-16999628
 ] 

runzhiwang commented on LIVY-730:
-

I'm working on it.

> Travis failed because of RSCClient instance stopped
> ---
>
> Key: LIVY-730
> URL: https://issues.apache.org/jira/browse/LIVY-730
> Project: Livy
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.6.0
>Reporter: runzhiwang
>Priority: Major
> Attachments: image-2019-12-19-08-56-17-456.png
>
>
> !image-2019-12-19-08-56-17-456.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (LIVY-730) Travis failed because of RSCClient instance stopped

2019-12-18 Thread runzhiwang (Jira)
runzhiwang created LIVY-730:
---

 Summary: Travis failed because of RSCClient instance stopped
 Key: LIVY-730
 URL: https://issues.apache.org/jira/browse/LIVY-730
 Project: Livy
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.6.0
Reporter: runzhiwang
 Attachments: image-2019-12-19-08-56-17-456.png

!image-2019-12-19-08-56-17-456.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-732) A Common Zookeeper Wrapper Utility

2019-12-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-732:

Description: Currently, the ZooKeeper utility is mixed into 
ZooKeeperStateStore. 

> A Common Zookeeper Wrapper Utility 
> ---
>
> Key: LIVY-732
> URL: https://issues.apache.org/jira/browse/LIVY-732
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> Currently, the ZooKeeper utility is mixed into ZooKeeperStateStore. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-732) A Common Zookeeper Wrapper Utility

2019-12-19 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-732:

Description: Currently, the utility of zookeeper is mixed into 
ZooKeeperStateStore, and it's weird.  (was: Currently, the utility of zookeeper 
mixed with ZooKeeperStateStore. )

> A Common Zookeeper Wrapper Utility 
> ---
>
> Key: LIVY-732
> URL: https://issues.apache.org/jira/browse/LIVY-732
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> Currently, the utility of zookeeper is mixed into ZooKeeperStateStore, and 
> it's weird.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-720) NoSuchElementException caused when reading from hdfs submitted via livy programmatic api

2019-12-09 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-720:

Attachment: FailingLivySparkJob.zip

> NoSuchElementException caused when reading from hdfs  submitted via livy 
> programmatic api
> -
>
> Key: LIVY-720
> URL: https://issues.apache.org/jira/browse/LIVY-720
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
> Environment: Using a docker container on windows 10: 
> https://hub.docker.com/r/cheathwood/hadoop-spark-livy
>Reporter: Stephen Jenkins
>Priority: Blocker
> Attachments: FailingLivySparkJob.zip
>
>
> Hi,
>  
> I've been using the Livy programmatic API to submit Spark jobs written in 
> Scala and I've run into a strange issue. I'm using case classes to wrap the 
> parameters I want to send over to Spark, then within the job I manipulate 
> them to be used for different parts of the job. However, it seems that whenever 
> I try to read and collect data from HDFS I get the following error:
> {code:java}
> java.util.NoSuchElementException: head of empty list
>   at scala.collection.immutable.Nil$.head(List.scala:420)
>   at scala.collection.immutable.Nil$.head(List.scala:417)
>   at scala.collection.immutable.List.map(List.scala:277)
>   at 
> scala.reflect.internal.Symbols$Symbol.parentSymbols(Symbols.scala:2117)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:301)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:341)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply$mcV$sp(SymbolLoaders.scala:74)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.internal.SymbolTable.slowButSafeEnteringPhaseNotLaterThan(SymbolTable.scala:263)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType.complete(SymbolLoaders.scala:71)
>   at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1514)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.scala$reflect$runtime$SynchronizedSymbols$SynchronizedSymbol$$super$info(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at scala.reflect.runtime.Gil$class.gilSynchronized(Gil.scala:19)
>   at 
> scala.reflect.runtime.JavaUniverse.gilSynchronized(JavaUniverse.scala:16)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:123)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.info(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.info(SynchronizedSymbols.scala:174)
>   at scala.reflect.internal.Types$TypeRef.thisInfo(Types.scala:2194)
>   at scala.reflect.internal.Types$TypeRef.baseClasses(Types.scala:2199)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMemberBase.(FindMembers.scala:17)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMember.(FindMembers.scala:219)
>   at 
> scala.reflect.internal.Types$Type.scala$reflect$internal$Types$Type$$findMemberInternal$1(Types.scala:1014)
>   at scala.reflect.internal.Types$Type.findMember(Types.scala:1016)
>   at scala.reflect.internal.Types$Type.memberBasedOnName(Types.scala:631)
>   at scala.reflect.internal.Types$Type.member(Types.scala:600)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.staticPackage(Mirrors.scala:204)
>   at 
> scala.reflect.runtime.JavaMirrors$JavaMirror.staticPackage(JavaMirrors.scala:82)
>   at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:263)
>   at 
> scala.reflect.runtime.JavaMirrors$class.scala$reflect$runtime$JavaMirrors$$createMirror(JavaMirrors.scala:32)
>   at 
> scala.reflect.runtime.JavaMirrors$$anonfun$runtimeMirror$1.apply(JavaMirrors.scala:49)
>   at 
> scala.reflect.runtime.JavaMirrors$$anonfun$runtimeMirror$1.apply(JavaMirrors.scala:47)
>   

[jira] [Comment Edited] (LIVY-720) NoSuchElementException caused when reading from hdfs submitted via livy programmatic api

2019-12-09 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/LIVY-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987467#comment-16987467
 ] 

runzhiwang edited comment on LIVY-720 at 12/10/19 6:56 AM:
---

[~steviej08] Hi, I think the cause of the problem is your Scala version. I have 
simplified your FailingLivySparkJob and changed the Scala version in pom.xml, 
and it works fine. You can find it in the attached file.
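
For context, a minimal sketch of a programmatic-API submission of the kind 
described in this issue; the Livy URL, jar path and HDFS path below are 
placeholders:

{code:scala}
import java.io.File
import java.net.URI

import org.apache.livy.{Job, JobContext, LivyClientBuilder}

val client = new LivyClientBuilder()
  .setURI(new URI("http://livy-host:8998"))  // placeholder Livy server URL
  .build()
try {
  // Ship the fat jar containing the job classes (path is a placeholder).
  client.uploadJar(new File("/path/to/job-assembly.jar")).get()

  val count = client.submit(new Job[java.lang.Long] {
    override def call(jc: JobContext): java.lang.Long =
      java.lang.Long.valueOf(jc.sc().textFile("hdfs:///some/input").count())
  }).get()
  println(s"rows: $count")
} finally {
  client.stop(true)
}
{code}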


was (Author: runzhiwang):
I will work on it recently. 

> NoSuchElementException caused when reading from hdfs  submitted via livy 
> programmatic api
> -
>
> Key: LIVY-720
> URL: https://issues.apache.org/jira/browse/LIVY-720
> Project: Livy
>  Issue Type: Bug
>  Components: RSC
>Affects Versions: 0.6.0
> Environment: Using a docker container on windows 10: 
> https://hub.docker.com/r/cheathwood/hadoop-spark-livy
>Reporter: Stephen Jenkins
>Priority: Blocker
>
> Hi,
>  
> I've been using the Livy programmatic API to submit Spark jobs written in 
> Scala and I've run into a strange issue. I'm using case classes to wrap the 
> parameters I want to send over to Spark, then within the job I manipulate 
> them to be used for different parts of the job. However, it seems that whenever 
> I try to read and collect data from HDFS I get the following error:
> {code:java}
> java.util.NoSuchElementException: head of empty list
>   at scala.collection.immutable.Nil$.head(List.scala:420)
>   at scala.collection.immutable.Nil$.head(List.scala:417)
>   at scala.collection.immutable.List.map(List.scala:277)
>   at 
> scala.reflect.internal.Symbols$Symbol.parentSymbols(Symbols.scala:2117)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:301)
>   at 
> scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:341)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply$mcV$sp(SymbolLoaders.scala:74)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType$$anonfun$complete$2.apply(SymbolLoaders.scala:71)
>   at 
> scala.reflect.internal.SymbolTable.slowButSafeEnteringPhaseNotLaterThan(SymbolTable.scala:263)
>   at 
> scala.reflect.runtime.SymbolLoaders$LazyPackageType.complete(SymbolLoaders.scala:71)
>   at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1514)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.scala$reflect$runtime$SynchronizedSymbols$SynchronizedSymbol$$super$info(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127)
>   at scala.reflect.runtime.Gil$class.gilSynchronized(Gil.scala:19)
>   at 
> scala.reflect.runtime.JavaUniverse.gilSynchronized(JavaUniverse.scala:16)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:123)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:174)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.info(SynchronizedSymbols.scala:127)
>   at 
> scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$1.info(SynchronizedSymbols.scala:174)
>   at scala.reflect.internal.Types$TypeRef.thisInfo(Types.scala:2194)
>   at scala.reflect.internal.Types$TypeRef.baseClasses(Types.scala:2199)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMemberBase.(FindMembers.scala:17)
>   at 
> scala.reflect.internal.tpe.FindMembers$FindMember.(FindMembers.scala:219)
>   at 
> scala.reflect.internal.Types$Type.scala$reflect$internal$Types$Type$$findMemberInternal$1(Types.scala:1014)
>   at scala.reflect.internal.Types$Type.findMember(Types.scala:1016)
>   at scala.reflect.internal.Types$Type.memberBasedOnName(Types.scala:631)
>   at scala.reflect.internal.Types$Type.member(Types.scala:600)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
>   at 
> scala.reflect.internal.Mirrors$RootsBase.staticPackage(Mirrors.scala:204)
>   at 
> scala.reflect.runtime.JavaMirrors$JavaMirror.staticPackage(JavaMirrors.scala:82)
>   at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:263)
>   at 
> 

[jira] [Updated] (LIVY-721) Distributed Session ID Generation

2020-01-14 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-721:

Description: 
# When generating a unique session id in multi-active HA mode: first, acquire 
the distributed lock; second, read the session id from the filesystem or 
ZooKeeper; third, increase the session id and save it back to the filesystem or 
ZooKeeper; fourth, release the distributed lock (see the sketch below).
 # ZooKeeperManager provides the distributed lock used to generate the 
distributed session id.
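
A minimal sketch of the lock-then-increment idea above using Apache Curator, 
assuming ZooKeeper as the store; the connection string and paths are 
placeholders:

{code:scala}
import java.nio.charset.StandardCharsets

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.framework.recipes.locks.InterProcessMutex
import org.apache.curator.retry.ExponentialBackoffRetry

val client = CuratorFrameworkFactory.newClient("zk-host:2181", new ExponentialBackoffRetry(1000, 3))
client.start()

val lock = new InterProcessMutex(client, "/livy/locks/session-id")  // placeholder path
val idPath = "/livy/state/next-session-id"                          // placeholder path

def nextSessionId(): Int = {
  lock.acquire()                                   // 1. take the distributed lock
  try {
    val current =                                  // 2. read the stored counter
      if (client.checkExists().forPath(idPath) == null) 0
      else new String(client.getData.forPath(idPath), StandardCharsets.UTF_8).toInt
    val next = (current + 1).toString.getBytes(StandardCharsets.UTF_8)
    if (client.checkExists().forPath(idPath) == null) {             // 3. save it back
      client.create().creatingParentsIfNeeded().forPath(idPath, next)
    } else {
      client.setData().forPath(idPath, next)
    }
    current                                        // hand out the id that was read
  } finally {
    lock.release()                                 // 4. release the lock
  }
}
{code}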

  was:
# When generate unique session id with multiple-active HA mode. First, get the 
distributed lock,
Second, get the session id from filesystem or zookeeper. Third, increase the 
session id and save it in the filesystem or zookeeper. Forth, release the 
distributed lock.

 # ZooKeeperManager provides the distributed lock to generate the distributed 
session id.


> Distributed Session ID Generation
> -
>
> Key: LIVY-721
> URL: https://issues.apache.org/jira/browse/LIVY-721
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> # When generating a unique session id in multi-active HA mode: first, acquire 
> the distributed lock; second, read the session id from the filesystem or 
> ZooKeeper; third, increase the session id and save it back to the filesystem 
> or ZooKeeper; fourth, release the distributed lock.
>  # ZooKeeperManager provides the distributed lock used to generate the 
> distributed session id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (LIVY-721) Distributed Session ID Generation

2020-01-14 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/LIVY-721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated LIVY-721:

Description: 
# When generating a unique session id in multi-active HA mode: first, acquire 
the distributed lock; second, read the session id from the filesystem or 
ZooKeeper; third, increase the session id and save it back to the filesystem or 
ZooKeeper; fourth, release the distributed lock.

 # ZooKeeperManager provides the distributed lock used to generate the 
distributed session id.

> Distributed Session ID Generation
> -
>
> Key: LIVY-721
> URL: https://issues.apache.org/jira/browse/LIVY-721
> Project: Livy
>  Issue Type: Sub-task
>Reporter: Yiheng Wang
>Priority: Major
>
> # When generating a unique session id in multi-active HA mode: first, acquire 
> the distributed lock; second, read the session id from the filesystem or 
> ZooKeeper; third, increase the session id and save it back to the filesystem 
> or ZooKeeper; fourth, release the distributed lock.
>  # ZooKeeperManager provides the distributed lock used to generate the 
> distributed session id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)