jnturton commented on code in PR #2641:
URL: https://github.com/apache/drill/pull/2641#discussion_r964935897
##########
contrib/storage-http/src/main/java/org/apache/drill/exec/store/http/udfs/HttpHelperFunctions.java:
##########

@@ -189,6 +191,8 @@ public void eval() {
       rowWriter.start();
       if (jsonLoader.parser().next()) {
         rowWriter.save();
+      } else {

Review Comment:
@cgivre yes, you're right. I tried a couple of things. First I provided http_get with a JSON response that would normally produce 64k+1 rows if queried directly, but it looked to me like it was being handled in a single batch since, I guess, the incoming row count of a query based on VALUES(1) is still 1. I then wrote a query to `SELECT http_get(some simple JSON)` from a mock table containing 64k+1 rows. This overwhelms the okhttp3 mock server and fails with a timeout. I'm not sure if there is some other test to try here?
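For reference, a rough sketch of what those two queries might have looked like. This is a hypothetical illustration only: the URLs, column names, and mock table name are placeholders (assuming Drill's `mock` storage plugin, where the row count is encoded in the table-name suffix and column types in the column-name suffixes), not the values used in the actual test.

```sql
-- First attempt: the JSON payload would expand to 64k+1 rows, but the
-- incoming row count of a VALUES(1) query is still 1, so the result is
-- handled in a single batch.
SELECT http_get('http://localhost:8091/big.json') AS response
FROM (VALUES(1));

-- Second attempt: the mock storage plugin generates 65537 (64k+1) rows,
-- so http_get is invoked once per incoming row; each invocation hits the
-- okhttp3 MockWebServer, which times out under the load.
SELECT id_i, http_get('http://localhost:8091/row.json') AS response
FROM mock.`rows_65537`;
```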