[GitHub] felixcheung commented on a change in pull request #3301: ZEPPELIN-3975. Add limit local property for SparkInterpreter and JdbcInterpreter
felixcheung commented on a change in pull request #3301: ZEPPELIN-3975. Add limit local property for SparkInterpreter and JdbcInterpreter
URL: https://github.com/apache/zeppelin/pull/3301#discussion_r255386737

File path: spark/interpreter/src/main/java/org/apache/zeppelin/spark/SparkSqlInterpreter.java

    @@ -86,8 +86,10 @@ public InterpreterResult internalInterpret(String st, InterpreterContext context
         try {
           Method method = sqlc.getClass().getMethod("sql", String.class);
    +      int maxResult = Integer.parseInt(context.getLocalProperties().getOrDefault("limit",
    +          "" + sparkInterpreter.getZeppelinContext().getMaxResult()));

Review comment:
same here?

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards,
Apache Git Services
felixcheung commented on a change in pull request #3301: ZEPPELIN-3975. Add limit local property for SparkInterpreter and JdbcInterpreter
URL: https://github.com/apache/zeppelin/pull/3301#discussion_r253290767

File path: jdbc/src/main/java/org/apache/zeppelin/jdbc/JDBCInterpreter.java

    @@ -721,8 +721,10 @@ private InterpreterResult executeSql(String propertyKey, String sql,
         statement = connection.createStatement();
         // fetch n+1 rows in order to indicate there's more rows available (for large selects)
    -    statement.setFetchSize(getMaxResult());
    -    statement.setMaxRows(maxRows);
    +    statement.setFetchSize(Integer.parseInt(interpreterContext
    +        .getLocalProperties().getOrDefault("limit", "" + getMaxResult())));

Review comment:
nit: I think this pattern of int -> string -> int is a bit odd; perhaps a helper func could help here:
`Integer.parseInt(interpreterContext.getLocalProperties().getOrDefault("limit", "" + getMaxResult()))`
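The helper function the reviewer suggests could look like the sketch below. This is an illustration only: the name `resolveLimit` and its fallback-on-bad-input behavior are assumptions, not part of the PR or the Zeppelin API.

```java
import java.util.HashMap;
import java.util.Map;

public class LimitHelper {
  // Hypothetical helper replacing the int -> String -> int round-trip:
  // read the "limit" local property, falling back to the configured
  // max result when the property is absent or not a valid integer.
  static int resolveLimit(Map<String, String> localProperties, int maxResult) {
    String limit = localProperties.get("limit");
    if (limit == null) {
      return maxResult;
    }
    try {
      return Integer.parseInt(limit);
    } catch (NumberFormatException e) {
      return maxResult;
    }
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<>();
    System.out.println(resolveLimit(props, 1000));  // no "limit" set -> falls back to maxResult
    props.put("limit", "50");
    System.out.println(resolveLimit(props, 1000));  // "limit" set -> uses the local property
  }
}
```

The call site would then shrink to something like `statement.setFetchSize(resolveLimit(interpreterContext.getLocalProperties(), getMaxResult()))`, avoiding the string concatenation entirely.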
felixcheung commented on a change in pull request #3301: ZEPPELIN-3975. Add limit local property for SparkInterpreter and JdbcInterpreter
URL: https://github.com/apache/zeppelin/pull/3301#discussion_r253290730

File path: spark/spark1-shims/src/main/scala/org/apache/zeppelin/spark/Spark1Shims.java

    @@ -56,23 +56,27 @@ public String showDataFrame(Object obj, int maxResult) {
         if (obj instanceof DataFrame) {
           DataFrame df = (DataFrame) obj;
           String[] columns = df.columns();
    +      // fetch maxResult+1 rows so that we can check whether it is larger than zeppelin.spark.maxResult
           List rows = df.takeAsList(maxResult + 1);
    -
           StringBuilder msg = new StringBuilder();
           msg.append("%table ");
           msg.append(StringUtils.join(columns, "\t"));
           msg.append("\n");
    +      boolean isLargerThanMaxResult = rows.size() > maxResult;
    +      if (isLargerThanMaxResult) {
    +        rows = rows.subList(0, maxResult);
    +      }
           for (Row row : rows) {
             for (int i = 0; i < row.size(); ++i) {
               msg.append(row.get(i));
    -          if (i != row.size() - 1) {
    +          if (i != row.size() -1) {

Review comment:
space after `-` and before `1`?
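The fetch-maxResult+1-then-truncate pattern in the diff above can be sketched in isolation. This is a standalone illustration with plain lists standing in for the DataFrame rows; none of the names here come from the Zeppelin codebase except `maxResult` and `isLargerThanMaxResult`.

```java
import java.util.ArrayList;
import java.util.List;

public class TruncateDemo {
  public static void main(String[] args) {
    int maxResult = 3;
    // Simulate fetching maxResult + 1 rows, as the diff does with takeAsList.
    List<Integer> rows = new ArrayList<>();
    for (int i = 0; i < maxResult + 1; i++) {
      rows.add(i);
    }
    // Receiving the extra row proves the full result set exceeds maxResult,
    // without having to count it...
    boolean isLargerThanMaxResult = rows.size() > maxResult;
    if (isLargerThanMaxResult) {
      // ...but only maxResult rows are actually displayed.
      rows = rows.subList(0, maxResult);
    }
    System.out.println(isLargerThanMaxResult);
    System.out.println(rows.size());
  }
}
```

Fetching one extra row is a cheap way to distinguish "exactly maxResult rows" from "more than maxResult rows", which lets the UI warn that output was truncated.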