alexott commented on a change in pull request #3579: [ZEPPELIN-4522]. Support multiple sql statements for SparkSqlInterpreter
URL: https://github.com/apache/zeppelin/pull/3579#discussion_r361794139
 
 

 ##########
 File path: spark/interpreter/src/main/java/org/apache/zeppelin/spark/SparkSqlInterpreter.java
 ##########
 @@ -82,26 +85,35 @@ public InterpreterResult internalInterpret(String st, InterpreterContext context
     sparkInterpreter.getZeppelinContext().setInterpreterContext(context);
     SQLContext sqlc = sparkInterpreter.getSQLContext();
     SparkContext sc = sqlc.sparkContext();
-    sc.setLocalProperty("spark.scheduler.pool", context.getLocalProperties().get("pool"));
-    sc.setJobGroup(Utils.buildJobGroupId(context), Utils.buildJobDesc(context), false);
-
-    try {
-      Method method = sqlc.getClass().getMethod("sql", String.class);
-      int maxResult = Integer.parseInt(context.getLocalProperties().getOrDefault("limit",
-              "" + sparkInterpreter.getZeppelinContext().getMaxResult()));
-      String msg = sparkInterpreter.getZeppelinContext().showData(
-          method.invoke(sqlc, st), maxResult);
-      sc.clearJobGroup();
-      return new InterpreterResult(Code.SUCCESS, msg);
-    } catch (Exception e) {
-      if (Boolean.parseBoolean(getProperty("zeppelin.spark.sql.stacktrace"))) {
-        return new InterpreterResult(Code.ERROR, ExceptionUtils.getStackTrace(e));
+
+    StringBuilder builder = new StringBuilder();
+    List<String> sqls = sqlSplitter.splitSql(st);
+    for (String sql : sqls) {
+      sc.setLocalProperty("spark.scheduler.pool", context.getLocalProperties().get("pool"));
+      sc.setJobGroup(Utils.buildJobGroupId(context), Utils.buildJobDesc(context), false);
+
+      try {
+        Method method = sqlc.getClass().getMethod("sql", String.class);
+        int maxResult = Integer.parseInt(context.getLocalProperties().getOrDefault("limit",
+                "" + sparkInterpreter.getZeppelinContext().getMaxResult()));
+        String result = sparkInterpreter.getZeppelinContext().showData(
+                method.invoke(sqlc, sql), maxResult);
+        sc.clearJobGroup();
+        builder.append(result);
+      } catch (Exception e) {
+        if (Boolean.parseBoolean(getProperty("zeppelin.spark.sql.stacktrace"))) {
 
 Review comment:
   Maybe output the SQL clause that caused the error as part of the error message, in addition to the exception? I don't remember whether Spark SQL always shows the original SQL or not.
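The suggestion above could be sketched roughly as follows. This is only a minimal illustration of prepending the failing statement to the error text, not Zeppelin's actual code; `buildErrorMessage` is a hypothetical helper introduced here for the sketch.

```java
// Sketch: when one of several SQL statements in a paragraph fails,
// include the offending statement in the error so the user can
// locate it without guessing which statement broke.
public class SqlErrorReport {

    // Hypothetical helper: combine the failing SQL with the exception
    // message (or a stack trace, depending on configuration).
    static String buildErrorMessage(String sql, Exception e) {
        return "Error while running SQL:\n" + sql + "\n\n" + e.getMessage();
    }

    public static void main(String[] args) {
        String msg = buildErrorMessage(
                "select * from missing_table",
                new RuntimeException("Table or view not found: missing_table"));
        System.out.println(msg);
    }
}
```

In the interpreter's catch block, the same idea would mean passing the current `sql` loop variable into the `InterpreterResult` error text alongside the exception or stack trace.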
