[jira] [Updated] (DRILL-7222) Visualize estimated and actual row counts for a query

2019-08-21 Thread Kunal Khatua (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7222:

Labels: doc-impacting ready-to-commit user-experience  (was: doc-impacting 
user-experience)

> Visualize estimated and actual row counts for a query
> -
>
> Key: DRILL-7222
> URL: https://issues.apache.org/jira/browse/DRILL-7222
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting, ready-to-commit, user-experience
> Fix For: 1.17.0
>
>
> With statistics in place, it would be useful to have the *estimated* rowcount 
> alongside the *actual* rowcount in the query profile's operator overview.
> We can extract this from the Physical Plan section of the profile.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (DRILL-7192) Drill limits rows when autoLimit is disabled

2019-08-19 Thread Kunal Khatua (Jira)


[ 
https://issues.apache.org/jira/browse/DRILL-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910750#comment-16910750
 ] 

Kunal Khatua commented on DRILL-7192:
-

[~vvysotskyi] there is a `!reset rowlimit` option in SqlLine that undoes the `!set 
rowlimit` option. I'll see if there is something that applies this for a generic 
JDBC client as well. I suspect that setting an explicit zero value will not be 
applied if a non-zero value is already set (even if temporarily).
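
To check this from a generic JDBC client, a sketch along these lines could be used 
(the JDBC URL and query are placeholders; whether {{setMaxRows(0)}} really clears a 
previously applied limit is exactly what needs verifying):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RowLimitCheck {
  public static void main(String[] args) throws Exception {
    // Placeholder connection string for a local Drillbit
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010")) {
      try (Statement stmt = conn.createStatement()) {
        stmt.setMaxRows(10);                // analogous to SqlLine's "!set rowlimit 10"
        printCount(stmt, "maxRows=10");
      }
      try (Statement stmt = conn.createStatement()) {
        stmt.setMaxRows(0);                 // explicit "no limit" on a fresh Statement
        printCount(stmt, "maxRows=0");      // does this really return all rows?
      }
    }
  }

  private static void printCount(Statement stmt, String label) throws Exception {
    int rows = 0;
    try (ResultSet rs = stmt.executeQuery("SELECT * FROM cp.`employee.json`")) {
      while (rs.next()) {
        rows++;
      }
    }
    System.out.println(label + ": " + rows + " rows");
  }
}
{code}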


> Drill limits rows when autoLimit is disabled
> 
>
> Key: DRILL-7192
> URL: https://issues.apache.org/jira/browse/DRILL-7192
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> In DRILL-7048, autoLimit was implemented for the JDBC and REST clients.
> *Steps to reproduce the issue:*
>  1. Check that autoLimit is disabled; if not, disable it and restart Drill.
>  2. Submit any query and verify that the row count is correct; for example,
> {code:sql}
> SELECT * FROM cp.`employee.json`;
> {code}
> returns 1,155 rows
>  3. Enable autoLimit for the SqlLine client:
> {code:sql}
> !set rowLimit 10
> {code}
> 4. Submit the same query and verify that the result has 10 rows.
>  5. Disable autoLimit:
> {code:sql}
> !set rowLimit 0
> {code}
> 6. Submit the same query; this time *it returns 10 rows instead of 
> 1,155*.
> The correct row count is returned only after creating a new connection.
> The same issue is also observed with the SQuirreL SQL client, whereas, for 
> example, Postgres works correctly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (DRILL-7338) REST API calls to Drill fail due to insufficient heap memory

2019-08-06 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua resolved DRILL-7338.
-
Resolution: Fixed
  Reviewer: Arina Ielchiieva

> REST API calls to Drill fail due to insufficient heap memory
> 
>
> Key: DRILL-7338
> URL: https://issues.apache.org/jira/browse/DRILL-7338
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Aditya Allamraju
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> Drill queries submitted via REST API calls have started failing (error below) after 
> recent changes.
> {code:java}
> RESOURCE ERROR: There is not enough heap memory to run this query using the 
> web interface.
> Please try a query with fewer columns or with a filter or limit condition to 
> limit the data returned.
> You can also try an ODBC/JDBC client.{code}
> They were running fine earlier, as the ResultSet returned was just a few rows. 
> These queries now fail even for very small result sets (< 10 rows).
> Investigating the issue revealed that we introduced a check to limit heap 
> usage.
> Looking at the wrapper code in 
> *_exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java_*
> that throws this error, I see certain issues. It seems we use a 
> threshold of *85%* of heap usage before throwing that warning and failing the 
> query.
>  
> {code:java}
> public class QueryWrapper {
>   private static final org.slf4j.Logger logger = 
> org.slf4j.LoggerFactory.getLogger(QueryWrapper.class);
>   // Heap usage threshold/trigger to provide resiliency on web server for 
> queries submitted via HTTP
>   private static final double HEAP_MEMORY_FAILURE_THRESHOLD = 0.85;
> ...
>   private static MemoryMXBean memMXBean = ManagementFactory.getMemoryMXBean();
> ...
>   // Wait until the query execution is complete or there is error submitting 
> the query
> logger.debug("Wait until the query execution is complete or there is 
> error submitting the query");
> do {
>   try {
> isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
> //periodically timeout 1 sec to check heap
>   } catch (InterruptedException e) {}
>   usagePercent = getHeapUsage();
>   if (usagePercent >  HEAP_MEMORY_FAILURE_THRESHOLD) {
> nearlyOutOfHeapSpace = true;
>   }
> } while (!isComplete && !nearlyOutOfHeapSpace);
> {code}
> By using the above check, we unintentionally invite all the issues inherent to 
> Java's heap usage. The JVM tries to make maximum use of the heap until a minor 
> or major GC kicks in, i.e. GC runs only after there is no more space left in the 
> heap (eden or young gen).
> The workarounds I can think of to resolve this issue are:
>  # Remove this check altogether so we can see why it is filling up the heap.
>  # Advise users to stop using REST for querying data (we did this 
> already). *But not all users may be happy with this suggestion.* There 
> could be a few dynamic applications (dashboards, monitoring, etc.).
>  # Make the threshold high enough that GC has a better chance to kick in.
> If none of the above options work, we have to tune the Drillbit heap sizes. A 
> quick fix would be to raise the threshold from 85% to 100% (option 3 above).
>  
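
For context, a heap-usage probe along the following lines presumably backs the 
{{getHeapUsage()}} call referenced above; this is only an illustrative sketch using 
the standard {{MemoryMXBean}} API, not the actual Drill implementation:
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageProbe {
  private static final double HEAP_MEMORY_FAILURE_THRESHOLD = 0.85;
  private static final MemoryMXBean MEM_BEAN = ManagementFactory.getMemoryMXBean();

  // Fraction of the maximum heap currently in use (0.0 - 1.0)
  static double getHeapUsage() {
    MemoryUsage heap = MEM_BEAN.getHeapMemoryUsage();
    return (double) heap.getUsed() / heap.getMax();
  }

  public static void main(String[] args) {
    double usage = getHeapUsage();
    System.out.printf("Heap usage: %.1f%% (threshold %.0f%%)%n",
        usage * 100, HEAP_MEMORY_FAILURE_THRESHOLD * 100);
    // Note: "used" includes garbage that has not been collected yet, which is
    // why a raw used/max ratio can cross the threshold even though a GC cycle
    // would free plenty of space.
  }
}
{code}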



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Reopened] (DRILL-7338) REST API calls to Drill fail due to insufficient heap memory

2019-08-06 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reopened DRILL-7338:
-

> REST API calls to Drill fail due to insufficient heap memory
> 
>
> Key: DRILL-7338
> URL: https://issues.apache.org/jira/browse/DRILL-7338
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Aditya Allamraju
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> Drill queries submitted via REST API calls have started failing (error below) after 
> recent changes.
> {code:java}
> RESOURCE ERROR: There is not enough heap memory to run this query using the 
> web interface.
> Please try a query with fewer columns or with a filter or limit condition to 
> limit the data returned.
> You can also try an ODBC/JDBC client.{code}
> They were running fine earlier, as the ResultSet returned was just a few rows. 
> These queries now fail even for very small result sets (< 10 rows).
> Investigating the issue revealed that we introduced a check to limit heap 
> usage.
> Looking at the wrapper code in 
> *_exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java_*
> that throws this error, I see certain issues. It seems we use a 
> threshold of *85%* of heap usage before throwing that warning and failing the 
> query.
>  
> {code:java}
> public class QueryWrapper {
>   private static final org.slf4j.Logger logger = 
> org.slf4j.LoggerFactory.getLogger(QueryWrapper.class);
>   // Heap usage threshold/trigger to provide resiliency on web server for 
> queries submitted via HTTP
>   private static final double HEAP_MEMORY_FAILURE_THRESHOLD = 0.85;
> ...
>   private static MemoryMXBean memMXBean = ManagementFactory.getMemoryMXBean();
> ...
>   // Wait until the query execution is complete or there is error submitting 
> the query
> logger.debug("Wait until the query execution is complete or there is 
> error submitting the query");
> do {
>   try {
> isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
> //periodically timeout 1 sec to check heap
>   } catch (InterruptedException e) {}
>   usagePercent = getHeapUsage();
>   if (usagePercent >  HEAP_MEMORY_FAILURE_THRESHOLD) {
> nearlyOutOfHeapSpace = true;
>   }
> } while (!isComplete && !nearlyOutOfHeapSpace);
> {code}
> By using the above check, we unintentionally invite all the issues inherent to 
> Java's heap usage. The JVM tries to make maximum use of the heap until a minor 
> or major GC kicks in, i.e. GC runs only after there is no more space left in the 
> heap (eden or young gen).
> The workarounds I can think of to resolve this issue are:
>  # Remove this check altogether so we can see why it is filling up the heap.
>  # Advise users to stop using REST for querying data (we did this 
> already). *But not all users may be happy with this suggestion.* There 
> could be a few dynamic applications (dashboards, monitoring, etc.).
>  # Make the threshold high enough that GC has a better chance to kick in.
> If none of the above options work, we have to tune the Drillbit heap sizes. A 
> quick fix would be to raise the threshold from 85% to 100% (option 3 above).
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (DRILL-7338) REST API calls to Drill fail due to insufficient heap memory

2019-08-05 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7338:

Fix Version/s: 1.17.0

> REST API calls to Drill fail due to insufficient heap memory
> 
>
> Key: DRILL-7338
> URL: https://issues.apache.org/jira/browse/DRILL-7338
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Aditya Allamraju
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> Drill queries submitted via REST API calls have started failing (error below) after 
> recent changes.
> {code:java}
> RESOURCE ERROR: There is not enough heap memory to run this query using the 
> web interface.
> Please try a query with fewer columns or with a filter or limit condition to 
> limit the data returned.
> You can also try an ODBC/JDBC client.{code}
> They were running fine earlier, as the ResultSet returned was just a few rows. 
> These queries now fail even for very small result sets (< 10 rows).
> Investigating the issue revealed that we introduced a check to limit heap 
> usage.
> Looking at the wrapper code in 
> *_exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java_*
> that throws this error, I see certain issues. It seems we use a 
> threshold of *85%* of heap usage before throwing that warning and failing the 
> query.
>  
> {code:java}
> public class QueryWrapper {
>   private static final org.slf4j.Logger logger = 
> org.slf4j.LoggerFactory.getLogger(QueryWrapper.class);
>   // Heap usage threshold/trigger to provide resiliency on web server for 
> queries submitted via HTTP
>   private static final double HEAP_MEMORY_FAILURE_THRESHOLD = 0.85;
> ...
>   private static MemoryMXBean memMXBean = ManagementFactory.getMemoryMXBean();
> ...
>   // Wait until the query execution is complete or there is error submitting 
> the query
> logger.debug("Wait until the query execution is complete or there is 
> error submitting the query");
> do {
>   try {
> isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
> //periodically timeout 1 sec to check heap
>   } catch (InterruptedException e) {}
>   usagePercent = getHeapUsage();
>   if (usagePercent >  HEAP_MEMORY_FAILURE_THRESHOLD) {
> nearlyOutOfHeapSpace = true;
>   }
> } while (!isComplete && !nearlyOutOfHeapSpace);
> {code}
> By using the above check, we unintentionally invite all the issues inherent to 
> Java's heap usage. The JVM tries to make maximum use of the heap until a minor 
> or major GC kicks in, i.e. GC runs only after there is no more space left in the 
> heap (eden or young gen).
> The workarounds I can think of to resolve this issue are:
>  # Remove this check altogether so we can see why it is filling up the heap.
>  # Advise users to stop using REST for querying data (we did this 
> already). *But not all users may be happy with this suggestion.* There 
> could be a few dynamic applications (dashboards, monitoring, etc.).
>  # Make the threshold high enough that GC has a better chance to kick in.
> If none of the above options work, we have to tune the Drillbit heap sizes. A 
> quick fix would be to raise the threshold from 85% to 100% (option 3 above).
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (DRILL-7338) REST API calls to Drill fail due to insufficient heap memory

2019-08-05 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-7338:
---

Assignee: Kunal Khatua

> REST API calls to Drill fail due to insufficient heap memory
> 
>
> Key: DRILL-7338
> URL: https://issues.apache.org/jira/browse/DRILL-7338
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Aditya Allamraju
>Assignee: Kunal Khatua
>Priority: Major
>
> Drill queries submitted via REST API calls have started failing (error below) after 
> recent changes.
> {code:java}
> RESOURCE ERROR: There is not enough heap memory to run this query using the 
> web interface.
> Please try a query with fewer columns or with a filter or limit condition to 
> limit the data returned.
> You can also try an ODBC/JDBC client.{code}
> They were running fine earlier, as the ResultSet returned was just a few rows. 
> These queries now fail even for very small result sets (< 10 rows).
> Investigating the issue revealed that we introduced a check to limit heap 
> usage.
> Looking at the wrapper code in 
> *_exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/QueryWrapper.java_*
> that throws this error, I see certain issues. It seems we use a 
> threshold of *85%* of heap usage before throwing that warning and failing the 
> query.
>  
> {code:java}
> public class QueryWrapper {
>   private static final org.slf4j.Logger logger = 
> org.slf4j.LoggerFactory.getLogger(QueryWrapper.class);
>   // Heap usage threshold/trigger to provide resiliency on web server for 
> queries submitted via HTTP
>   private static final double HEAP_MEMORY_FAILURE_THRESHOLD = 0.85;
> ...
>   private static MemoryMXBean memMXBean = ManagementFactory.getMemoryMXBean();
> ...
>   // Wait until the query execution is complete or there is error submitting 
> the query
> logger.debug("Wait until the query execution is complete or there is 
> error submitting the query");
> do {
>   try {
> isComplete = webUserConnection.await(TimeUnit.SECONDS.toMillis(1)); 
> //periodically timeout 1 sec to check heap
>   } catch (InterruptedException e) {}
>   usagePercent = getHeapUsage();
>   if (usagePercent >  HEAP_MEMORY_FAILURE_THRESHOLD) {
> nearlyOutOfHeapSpace = true;
>   }
> } while (!isComplete && !nearlyOutOfHeapSpace);
> {code}
> By using the above check, we unintentionally invite all the issues inherent to 
> Java's heap usage. The JVM tries to make maximum use of the heap until a minor 
> or major GC kicks in, i.e. GC runs only after there is no more space left in the 
> heap (eden or young gen).
> The workarounds I can think of to resolve this issue are:
>  # Remove this check altogether so we can see why it is filling up the heap.
>  # Advise users to stop using REST for querying data (we did this 
> already). *But not all users may be happy with this suggestion.* There 
> could be a few dynamic applications (dashboards, monitoring, etc.).
>  # Make the threshold high enough that GC has a better chance to kick in.
> If none of the above options work, we have to tune the Drillbit heap sizes. A 
> quick fix would be to raise the threshold from 85% to 100% (option 3 above).
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (DRILL-7203) Back button for failed query does not return on Query page

2019-05-11 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7203:

Reviewer: Arina Ielchiieva

>  Back button for failed query does not return on Query page
> ---
>
> Key: DRILL-7203
> URL: https://issues.apache.org/jira/browse/DRILL-7203
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: back_button.JPG
>
>
> The Back button for a failed query returns to the page visited before the Query 
> page, not to the Query page itself.
> Steps: 
> 1. go to Logs page
> 2. go to Query page
> 3. execute query with incorrect syntax (ex: x)
> 4. error message will be displayed, Back button will be in left corner 
> (screenshot attached)
> 5. press Back button
> 6. user is redirected to Logs page



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-05-11 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837945#comment-16837945
 ] 

Kunal Khatua commented on DRILL-7048:
-

LGTM +1
Thanks, [~bbevens]

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning  Optimization
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.16.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since most of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the server could have a pre-defined value as 
> well (default 0, i.e. no limit) so that an _admin_ is able to enforce a 
> maximum limit on the result set size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7234) Allow support for using Drill WebUI through a Reverse Proxy server

2019-05-02 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7234:
---

 Summary: Allow support for using Drill WebUI through a Reverse 
Proxy server
 Key: DRILL-7234
 URL: https://issues.apache.org/jira/browse/DRILL-7234
 Project: Apache Drill
  Issue Type: Improvement
  Components: Web Server
Affects Versions: 1.16.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.17.0


Currently, Drill's WebUI has a lot of links and references going through the 
root of the URL.
i.e. to access the profiles listing or submit a query, we need to use 
the following URL format:
{code}
http://localhost:8047/profiles
http://localhost:8047/query
{code}

With a reverse proxy, these pages need to be accessed by:
{code}
http://localhost:8047/x/y/z/profiles
http://localhost:8047/x/y/z/query
{code}

However, the links within these pages do not include the *{{x/y/z/}}* prefix, as a 
result of which visiting those links will fail.

The WebServer should implement a mechanism that can detect this additional 
layer and modify the links within the webpage accordingly.
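
One possible approach (a sketch only; the header name, class names, and wiring are 
assumptions, not the actual implementation) is a servlet filter that picks up the 
prefix advertised by the reverse proxy, e.g. via the de-facto {{X-Forwarded-Prefix}} 
header, and exposes it so the page templates can prepend it to generated links:
{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter: records the proxy prefix so templates can render links
// as <prefix>/profiles, <prefix>/query, etc.
public class ProxyPrefixFilter implements Filter {
  public static final String PREFIX_ATTR = "drill.webui.prefix";

  @Override
  public void init(FilterConfig filterConfig) { }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    String prefix = ((HttpServletRequest) req).getHeader("X-Forwarded-Prefix"); // e.g. "/x/y/z"
    req.setAttribute(PREFIX_ATTR, prefix == null ? "" : prefix);
    chain.doFilter(req, resp);
  }

  @Override
  public void destroy() { }
}
{code}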



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7222) Visualize estimated and actual row counts for a query

2019-05-02 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7222:

Reviewer: Aman Sinha

> Visualize estimated and actual row counts for a query
> -
>
> Key: DRILL-7222
> URL: https://issues.apache.org/jira/browse/DRILL-7222
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: user-experience
> Fix For: 1.17.0
>
>
> With statistics in place, it would be useful to have the *estimated* rowcount 
> alongside the *actual* rowcount in the query profile's operator overview.
> We can extract this from the Physical Plan section of the profile.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7226) Compilation error on Windows when building from the release tarball sources

2019-04-30 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7226:

Fix Version/s: 1.17.0

> Compilation error on Windows when building from the release tarball sources
> ---
>
> Key: DRILL-7226
> URL: https://issues.apache.org/jira/browse/DRILL-7226
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Denys Ordynskiy
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: tarball_building.log
>
>
> *Description:*
>  OS - Windows.
>  Downloaded tarball with sources for the 
> [1.15|http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/apache-drill-1.15.0-src.tar.gz]
>  or 
> [1.16|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc2/apache-drill-1.16.0-src.tar.gz]
>  Drill release.
>  Extracted the sources.
>  Built sources using the following command:
> {noformat}
> mvn clean install -DskipTests -Pmapr
> {noformat}
> *Expected result:*
>  BUILD SUCCESS
> *Actual result:*
> {noformat}
> ...
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> D:\src\rc2\apache-drill-1.16.0-src\protocol\src\main\java\org\apache\drill\exec\proto\beans\RecordBatchDef.java:[53,17]
>  error: cannot find symbol
>   symbol:   class SerializedField
>   location: class RecordBatchDef
> ...
> BUILD FAILURE
> {noformat}
> See "tarball_building.log"
> There are no errors when building sources on Windows from the GitHub release 
> [branch|https://github.com/sohami/drill/commits/drill-1.16.0].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (DRILL-7226) Compilation error on Windows when building from the release tarball sources

2019-04-30 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830762#comment-16830762
 ] 

Kunal Khatua edited comment on DRILL-7226 at 4/30/19 11:53 PM:
---

[~denysord88] I was able to build on my Windows 10 Pro machine without any issue 
(Maven version 3.5.2) for the *mapr* profile. I'm, of course, not able to run that 
build, so I tried the default profile as well and was even able to run Drill in 
embedded mode (drill-embedded.bat).


{code}
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Drill Root POM .. SUCCESS [ 28.712 s]
[INFO] tools/Parent Pom ... SUCCESS [  2.214 s]
[INFO] tools/freemarker codegen tooling ... SUCCESS [ 28.078 s]
[INFO] Drill Protocol . SUCCESS [ 50.707 s]
[INFO] Common (Logical Plan, Base expressions)  SUCCESS [ 35.353 s]
[INFO] Logical Plan, Base expressions . SUCCESS [ 50.205 s]
[INFO] exec/Parent Pom  SUCCESS [  3.324 s]
[INFO] exec/memory/Parent Pom . SUCCESS [  2.142 s]
[INFO] exec/memory/base ... SUCCESS [ 21.928 s]
[INFO] exec/rpc ... SUCCESS [ 27.368 s]
[INFO] exec/Vectors ... SUCCESS [04:54 min]
[INFO] contrib/Parent Pom . SUCCESS [  1.361 s]
[INFO] contrib/data/Parent Pom  SUCCESS [  1.748 s]
[INFO] contrib/data/tpch-sample-data .. SUCCESS [ 10.786 s]
[INFO] exec/Java Execution Engine . SUCCESS [12:45 min]
[INFO] exec/JDBC Driver using dependencies  SUCCESS [01:46 min]
[INFO] JDBC JAR with all dependencies . SUCCESS [01:37 min]
[INFO] Drill-on-YARN .. SUCCESS [ 43.917 s]
[INFO] contrib/kudu-storage-plugin  SUCCESS [01:06 min]
[INFO] contrib/opentsdb-storage-plugin  SUCCESS [ 33.760 s]
[INFO] contrib/mongo-storage-plugin ... SUCCESS [ 33.846 s]
[INFO] contrib/hbase-storage-plugin ... SUCCESS [01:41 min]
[INFO] contrib/jdbc-storage-plugin  SUCCESS [ 29.823 s]
[INFO] contrib/hive-storage-plugin/Parent Pom . SUCCESS [  0.982 s]
[INFO] contrib/hive-storage-plugin/hive-exec-shaded ... SUCCESS [06:10 min]
[INFO] contrib/hive-storage-plugin/core ... SUCCESS [01:26 min]
[INFO] contrib/kafka-storage-plugin ... SUCCESS [ 48.280 s]
[INFO] contrib/drill-udfs . SUCCESS [ 39.188 s]
[INFO] contrib/format-syslog .. SUCCESS [ 32.512 s]
[INFO] contrib/ltsv-format-plugin . SUCCESS [ 31.756 s]
[INFO] Packaging and Distribution Assembly  SUCCESS [08:58 min]
[INFO] contrib/mapr-format-plugin . SUCCESS [01:09 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 50:59 min
[INFO] Finished at: 2019-04-30T16:47:26-07:00
[INFO] Final Memory: 186M/1740M
[INFO] 
{code}

Do you have protoc installed on your Windows box by any chance? I don't have it 
on mine.


was (Author: kkhatua):
[~denysord88] I was able to build on my Windows 10 Pro machine without any issue 
(Maven version 3.5.2) for the *mapr* profile. I'm, of course, not able to run that 
build, so I tried the default profile as well and was even able to run Drill in 
embedded mode (drill-embedded.bat).


{code}
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Drill Root POM .. SUCCESS [ 28.712 s]
[INFO] tools/Parent Pom ... SUCCESS [  2.214 s]
[INFO] tools/freemarker codegen tooling ... SUCCESS [ 28.078 s]
[INFO] Drill Protocol . SUCCESS [ 50.707 s]
[INFO] Common (Logical Plan, Base expressions)  SUCCESS [ 35.353 s]
[INFO] Logical Plan, Base expressions . SUCCESS [ 50.205 s]
[INFO] exec/Parent Pom  SUCCESS [  3.324 s]
[INFO] exec/memory/Parent Pom . SUCCESS [  2.142 s]
[INFO] exec/memory/base ... SUCCESS [ 21.928 s]
[INFO] exec/rpc ... SUCCESS [ 27.368 s]
[INFO] exec/Vectors ... SUCCESS [04:54 

[jira] [Commented] (DRILL-7226) Compilation error on Windows when building from the release tarball sources

2019-04-30 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830762#comment-16830762
 ] 

Kunal Khatua commented on DRILL-7226:
-

[~denysord88] I was able to build on my Windows 10 Pro machine without any issue 
(Maven version 3.5.2) for the *mapr* profile. I'm, of course, not able to run that 
build, so I tried the default profile as well and was even able to run Drill in 
embedded mode (drill-embedded.bat).


{code}
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Drill Root POM .. SUCCESS [ 28.712 s]
[INFO] tools/Parent Pom ... SUCCESS [  2.214 s]
[INFO] tools/freemarker codegen tooling ... SUCCESS [ 28.078 s]
[INFO] Drill Protocol . SUCCESS [ 50.707 s]
[INFO] Common (Logical Plan, Base expressions)  SUCCESS [ 35.353 s]
[INFO] Logical Plan, Base expressions . SUCCESS [ 50.205 s]
[INFO] exec/Parent Pom  SUCCESS [  3.324 s]
[INFO] exec/memory/Parent Pom . SUCCESS [  2.142 s]
[INFO] exec/memory/base ... SUCCESS [ 21.928 s]
[INFO] exec/rpc ... SUCCESS [ 27.368 s]
[INFO] exec/Vectors ... SUCCESS [04:54 min]
[INFO] contrib/Parent Pom . SUCCESS [  1.361 s]
[INFO] contrib/data/Parent Pom  SUCCESS [  1.748 s]
[INFO] contrib/data/tpch-sample-data .. SUCCESS [ 10.786 s]
[INFO] exec/Java Execution Engine . SUCCESS [12:45 min]
[INFO] exec/JDBC Driver using dependencies  SUCCESS [01:46 min]
[INFO] JDBC JAR with all dependencies . SUCCESS [01:37 min]
[INFO] Drill-on-YARN .. SUCCESS [ 43.917 s]
[INFO] contrib/kudu-storage-plugin  SUCCESS [01:06 min]
[INFO] contrib/opentsdb-storage-plugin  SUCCESS [ 33.760 s]
[INFO] contrib/mongo-storage-plugin ... SUCCESS [ 33.846 s]
[INFO] contrib/hbase-storage-plugin ... SUCCESS [01:41 min]
[INFO] contrib/jdbc-storage-plugin  SUCCESS [ 29.823 s]
[INFO] contrib/hive-storage-plugin/Parent Pom . SUCCESS [  0.982 s]
[INFO] contrib/hive-storage-plugin/hive-exec-shaded ... SUCCESS [06:10 min]
[INFO] contrib/hive-storage-plugin/core ... SUCCESS [01:26 min]
[INFO] contrib/kafka-storage-plugin ... SUCCESS [ 48.280 s]
[INFO] contrib/drill-udfs . SUCCESS [ 39.188 s]
[INFO] contrib/format-syslog .. SUCCESS [ 32.512 s]
[INFO] contrib/ltsv-format-plugin . SUCCESS [ 31.756 s]
[INFO] Packaging and Distribution Assembly  SUCCESS [08:58 min]
[INFO] contrib/mapr-format-plugin . SUCCESS [01:09 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 50:59 min
[INFO] Finished at: 2019-04-30T16:47:26-07:00
[INFO] Final Memory: 186M/1740M
[INFO] 
{code}

> Compilation error on Windows when building from the release tarball sources
> ---
>
> Key: DRILL-7226
> URL: https://issues.apache.org/jira/browse/DRILL-7226
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Denys Ordynskiy
>Assignee: Kunal Khatua
>Priority: Major
> Attachments: tarball_building.log
>
>
> *Description:*
>  OS - Windows.
>  Downloaded tarball with sources for the 
> [1.15|http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/apache-drill-1.15.0-src.tar.gz]
>  or 
> [1.16|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc2/apache-drill-1.16.0-src.tar.gz]
>  Drill release.
>  Extracted the sources.
>  Built sources using the following command:
> {noformat}
> mvn clean install -DskipTests -Pmapr
> {noformat}
> *Expected result:*
>  BUILD SUCCESS
> *Actual result:*
> {noformat}
> ...
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> D:\src\rc2\apache-drill-1.16.0-src\protocol\src\main\java\org\apache\drill\exec\proto\beans\RecordBatchDef.java:[53,17]
>  error: cannot find symbol
>   symbol:   class SerializedField
>   location: class RecordBatchDef
> ...
> BUILD FAILURE
> {noformat}
> See "tarball_building.log"
> There are no 

[jira] [Created] (DRILL-7222) Visualize estimated and actual row counts for a query

2019-04-26 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7222:
---

 Summary: Visualize estimated and actual row counts for a query
 Key: DRILL-7222
 URL: https://issues.apache.org/jira/browse/DRILL-7222
 Project: Apache Drill
  Issue Type: Improvement
  Components: Web Server
Affects Versions: 1.16.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.17.0


With statistics in place, it would be useful to have the *estimated* rowcount 
alongside the *actual* rowcount in the query profile's operator overview.

We can extract this from the Physical Plan section of the profile.
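
A rough illustration of the kind of extraction involved (the plan-text layout and 
the {{rowcount = ...}} pattern below are assumptions based on typical {{EXPLAIN}} 
output, not a specification of the profile format):
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlanRowCountExtractor {
  // Matches lines such as: "00-03  Project(...) : rowType = ..., rowcount = 115.0, ..."
  private static final Pattern OPERATOR_ROWCOUNT =
      Pattern.compile("^(\\d{2}-\\d{2})\\s+(\\w+).*?rowcount = ([\\d.E]+)", Pattern.MULTILINE);

  // Returns "operator-id operator-name" -> estimated rowcount parsed from the plan text
  static Map<String, Double> estimatedRowCounts(String physicalPlanText) {
    Map<String, Double> estimates = new LinkedHashMap<>();
    Matcher m = OPERATOR_ROWCOUNT.matcher(physicalPlanText);
    while (m.find()) {
      estimates.put(m.group(1) + " " + m.group(2), Double.parseDouble(m.group(3)));
    }
    return estimates;
  }
}
{code}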



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7192) Drill limits rows when autoLimit is disabled

2019-04-26 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827143#comment-16827143
 ] 

Kunal Khatua commented on DRILL-7192:
-

[~vvysotskyi]

The source of this issue is that we are setting the parameter at the {{SESSION}} 
level, although the feature works at the {{QUERY}} level (i.e. on the 
{{Statement}} object).
Calling {{!set rowlimit 10}} will execute {{Statement.setMaxRows()}} 
automatically for each new Statement. 

However, {{!set rowlimit 0}} will *not* execute {{Statement.setMaxRows()}} 
automatically for each new Statement. My guess is that since each query's 
Statement is a new object with a presumed default of 0 (on the client side), it 
does not issue another {{ALTER SESSION}} behind the scenes. I'll verify this with 
custom code, but if that is the case, fixing SQLLine is not the solution. The only 
workaround would be to not rely on the {{SESSION}}-level value, but on 
{{RunQuery.getAutolimitRowcount()}}, by ensuring the value is set in the 
{{DrillClient}}.
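
In other words, the suspected client-side sequence looks roughly like this (a sketch 
of the hypothesis above, not SqlLine's actual code; the JDBC URL is a placeholder):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RowLimitHypothesis {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010")) {
      // "!set rowlimit 10": SqlLine calls setMaxRows(10) on each new Statement,
      // which (per the hypothesis) also updates the session-level limit on the server.
      try (Statement first = conn.createStatement()) {
        first.setMaxRows(10);
        first.executeQuery("SELECT * FROM cp.`employee.json`").close();
      }

      // "!set rowlimit 0": a fresh Statement already defaults to maxRows = 0 on the
      // client side, so nothing is pushed to the server and the old session limit sticks.
      try (Statement second = conn.createStatement()) {
        second.executeQuery("SELECT * FROM cp.`employee.json`").close(); // still capped at 10?
      }
    }
  }
}
{code}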

> Drill limits rows when autoLimit is disabled
> 
>
> Key: DRILL-7192
> URL: https://issues.apache.org/jira/browse/DRILL-7192
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> In DRILL-7048, autoLimit was implemented for the JDBC and REST clients.
> *Steps to reproduce the issue:*
>  1. Check that autoLimit is disabled; if not, disable it and restart Drill.
>  2. Submit any query and verify that the row count is correct; for example,
> {code:sql}
> SELECT * FROM cp.`employee.json`;
> {code}
> returns 1,155 rows
>  3. Enable autoLimit for the SqlLine client:
> {code:sql}
> !set rowLimit 10
> {code}
> 4. Submit the same query and verify that the result has 10 rows.
>  5. Disable autoLimit:
> {code:sql}
> !set rowLimit 0
> {code}
> 6. Submit the same query; this time *it returns 10 rows instead of 
> 1,155*.
> The correct row count is returned only after creating a new connection.
> The same issue is also observed with the SQuirreL SQL client, whereas, for 
> example, Postgres works correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7202) Failed query shows warning that fragments has made no progress

2019-04-25 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7202:

Fix Version/s: (was: 1.17.0)
   1.16.0

> Failed query shows warning that fragments has made no progress
> --
>
> Key: DRILL-7202
> URL: https://issues.apache.org/jira/browse/DRILL-7202
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
> Attachments: FailedQuery_NoProgressWarning_Repro_Attempt.png, 
> no_fragments_progress_warning.JPG
>
>
> A failed query shows a warning that its fragments have made no progress.
> Since the query failed during the planning stage and did not have any fragments, 
> it looks strange to see such a warning. Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-7192) Drill limits rows when autoLimit is disabled

2019-04-25 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-7192:
---

Assignee: Kunal Khatua

> Drill limits rows when autoLimit is disabled
> 
>
> Key: DRILL-7192
> URL: https://issues.apache.org/jira/browse/DRILL-7192
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: Future
>
>
> In DRILL-7048, autoLimit was implemented for the JDBC and REST clients.
> *Steps to reproduce the issue:*
>  1. Check that autoLimit is disabled; if not, disable it and restart Drill.
>  2. Submit any query and verify that the row count is correct; for example,
> {code:sql}
> SELECT * FROM cp.`employee.json`;
> {code}
> returns 1,155 rows
>  3. Enable autoLimit for the SqlLine client:
> {code:sql}
> !set rowLimit 10
> {code}
> 4. Submit the same query and verify that the result has 10 rows.
>  5. Disable autoLimit:
> {code:sql}
> !set rowLimit 0
> {code}
> 6. Submit the same query; this time *it returns 10 rows instead of 
> 1,155*.
> The correct row count is returned only after creating a new connection.
> The same issue is also observed with the SQuirreL SQL client, whereas, for 
> example, Postgres works correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7192) Drill limits rows when autoLimit is disabled

2019-04-25 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7192:

Fix Version/s: (was: Future)
   1.17.0

> Drill limits rows when autoLimit is disabled
> 
>
> Key: DRILL-7192
> URL: https://issues.apache.org/jira/browse/DRILL-7192
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> In DRILL-7048, autoLimit was implemented for the JDBC and REST clients.
> *Steps to reproduce the issue:*
>  1. Check that autoLimit is disabled; if not, disable it and restart Drill.
>  2. Submit any query and verify that the row count is correct; for example,
> {code:sql}
> SELECT * FROM cp.`employee.json`;
> {code}
> returns 1,155 rows
>  3. Enable autoLimit for the SqlLine client:
> {code:sql}
> !set rowLimit 10
> {code}
> 4. Submit the same query and verify that the result has 10 rows.
>  5. Disable autoLimit:
> {code:sql}
> !set rowLimit 0
> {code}
> 6. Submit the same query; this time *it returns 10 rows instead of 
> 1,155*.
> The correct row count is returned only after creating a new connection.
> The same issue is also observed with the SQuirreL SQL client, whereas, for 
> example, Postgres works correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-24 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825569#comment-16825569
 ] 

Kunal Khatua commented on DRILL-7201:
-

I suspect this could be a localization issue, because I'm unable to reproduce 
it. I'm assuming that the font file loaded correctly:

[http://localhost:8047/static/fonts/glyphicons-halflings-regular.woff]

so it is likely that the fonts are being switched. Can you use the Elements tab 
in Chrome's developer tools and change the span's font-style to include

*{{font-family: 'Glyphicons Halflings'}}*

That said, I don't think this is a blocker, since these are only icons and the 
user experience is not adversely affected.

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Blocker
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, error_with_symbols.png, 
> image-2019-04-24-10-22-30-830.png, inspect-element-font.png
>
>
> The error window contains strange symbols on Windows but works fine on other 
> OSes. Previously we used an alert instead, which did not have this issue.
> Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-24 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825519#comment-16825519
 ] 

Kunal Khatua commented on DRILL-7201:
-

That is strange.

My versions are:
* Chrome 
 (Before update) Version 73.0.3683.103 (Official Build) (64-bit)
 (After update) Version 74.0.3729.108 (Official Build) (64-bit)
* Edge Browser is 
 Microsoft Edge 42.17134.1.0 + Microsoft EdgeHTML 17.17134

Both render fine, so I'm thinking it might have to do with the fonts missing or 
being overridden. Can you do an _Inspect_ of that element to see what font it 
is using in Chrome?

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Blocker
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, error_with_symbols.png, 
> image-2019-04-24-10-22-30-830.png
>
>
> The error window contains strange symbols on Windows but works fine on other 
> OSes. Previously we used an alert instead, which did not have this issue.
> Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7209) Bundle parquet-tools Jar file with Apache Drill distribution

2019-04-24 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7209:
---

 Summary: Bundle parquet-tools Jar file with Apache Drill 
distribution
 Key: DRILL-7209
 URL: https://issues.apache.org/jira/browse/DRILL-7209
 Project: Apache Drill
  Issue Type: Wish
  Components: Metadata
Reporter: Kunal Khatua
 Fix For: 1.17.0


It would be nice to have the parquet-tools JAR as part of the distribution, so 
as to allow users to peek into the files' schema, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7203) Back button for failed query does not return on Query page

2019-04-24 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825378#comment-16825378
 ] 

Kunal Khatua commented on DRILL-7203:
-

Thanks for catching this [~arina].

It seems that the URL of the Query page never changes on submission, even though 
the contents are updated to show the error. So, when hitting the back button, the 
browser goes back to the actual previous page in its history, which is the {{log}} 
page.

>  Back button for failed query does not return on Query page
> ---
>
> Key: DRILL-7203
> URL: https://issues.apache.org/jira/browse/DRILL-7203
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: back_button.JPG
>
>
> The Back button for a failed query returns to the page visited before the Query 
> page, not to the Query page itself.
> Steps: 
> 1. go to Logs page
> 2. go to Query page
> 3. execute query with incorrect syntax (ex: x)
> 4. error message will be displayed, Back button will be in left corner 
> (screenshot attached)
> 5. press Back button
> 6. user is redirected to Logs page



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7202) Failed query shows warning that fragments has made no progress

2019-04-24 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825376#comment-16825376
 ] 

Kunal Khatua commented on DRILL-7202:
-

[~arina]

 I don't see the issue with my failed queries. But the fix is trivial, since I 
can ensure that this applies only to {{queryState=RUNNING}} (and maybe 
{{queryState=STARTING}}) queries, which is when all the fragments are expected 
to have been spawned.

 !FailedQuery_NoProgressWarning_Repro_Attempt.png! 
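
A sketch of the intended guard (the class, method, and parameter names are 
illustrative, not the actual WebUI code):
{code:java}
public class NoProgressWarning {
  // Only warn about stalled fragments for queries that should already have
  // spawned fragments; a query that failed during planning never had any.
  static boolean shouldWarn(String queryState, long secsSinceProgress, long warnThresholdSecs) {
    boolean fragmentsExpected = "RUNNING".equals(queryState) || "STARTING".equals(queryState);
    return fragmentsExpected && secsSinceProgress > warnThresholdSecs;
  }
}
{code}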

> Failed query shows warning that fragments has made no progress
> --
>
> Key: DRILL-7202
> URL: https://issues.apache.org/jira/browse/DRILL-7202
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.17.0
>
> Attachments: FailedQuery_NoProgressWarning_Repro_Attempt.png, 
> no_fragments_progress_warning.JPG
>
>
> A failed query shows a warning that its fragments have made no progress.
> Since the query failed during the planning stage and did not have any fragments, 
> it looks strange to see such a warning. Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7202) Failed query shows warning that fragments has made no progress

2019-04-24 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7202:

Attachment: FailedQuery_NoProgressWarning_Repro_Attempt.png

> Failed query shows warning that fragments has made no progress
> --
>
> Key: DRILL-7202
> URL: https://issues.apache.org/jira/browse/DRILL-7202
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.17.0
>
> Attachments: FailedQuery_NoProgressWarning_Repro_Attempt.png, 
> no_fragments_progress_warning.JPG
>
>
> A failed query shows a warning that its fragments have made no progress.
> Since the query failed during the planning stage and did not have any fragments, 
> it looks strange to see such a warning. Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-24 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825361#comment-16825361
 ] 

Kunal Khatua commented on DRILL-7201:
-

[~arina]

This is working fine on my Windows 10 machine. The symbol is from the glyphicon 
package, so it should be rendered correctly. I suspect that the ".WOFF" 
font file hasn't loaded. Can you check in developer mode (e.g. in Chrome) and 
see if there are any errors?

!image-2019-04-24-10-22-30-830.png!

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Blocker
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, image-2019-04-24-10-22-30-830.png
>
>
> The error window contains strange symbols on Windows but works fine on other 
> OSes. Previously we used an alert instead, which did not have this issue.
> Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-24 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7201:

Attachment: image-2019-04-24-10-22-30-830.png

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Blocker
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, image-2019-04-24-10-22-30-830.png
>
>
> The error window contains strange symbols on Windows but works fine on other 
> OSes. Previously we used an alert instead, which did not have this issue.
> Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (DRILL-6879) Indicate a warning in the WebUI when a query makes little to no progress for a while

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821517#comment-16821517
 ] 

Kunal Khatua edited comment on DRILL-6879 at 4/18/19 9:53 PM:
--

Hi Bridget,

The content looks good, but the table in 
[https://drill.apache.org/docs/query-profiles/#query-profile-warnings] is 
unusually squeezed for the *Icon* column.

Can you see if something like suffixing {{ }} to the *Icon* column header helps?

(Ref: https://stackoverflow.com/a/44555965)


was (Author: kkhatua):
Hi Bridget,

The content looks good, but the table in 
[https://drill.apache.org/docs/query-profiles/#query-profile-warnings] is 
unusually squeezed for the *Icon* column.

Can you see if something like suffixing {{}} to the *Icon* column header 
helps?

> Indicate a warning in the WebUI when a query makes little to no progress for 
> a while
> 
>
> Key: DRILL-6879
> URL: https://issues.apache.org/jira/browse/DRILL-6879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring, Web Server
>Affects Versions: 1.14.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-complete, ready-to-commit
> Fix For: 1.16.0
>
> Attachments: image-2018-12-04-11-54-54-247.png, 
> image-2018-12-06-11-19-00-339.png, image-2018-12-06-11-27-14-719.png
>
>
> When running a very large query on a cluster with limited resources, we 
> noticed that one node's JVM froze the fragment threads as it 
> tried to do some work (GC, perhaps?). This is a clear indication that the 
> query is stuck in a state it might not recover from.
>  Under such circumstances, it makes sense to cancel, or at least warn the user 
> on that page, when the query exceeds a certain threshold. 
>  To detect this, the user will find that the {{Last Progress}} column in 
> the Fragments Overview section shows large times.
> !image-2018-12-04-11-54-54-247.png|width=969,height=336!
> In addition, there are instances where a query might have buffered operators 
> spilling to disk, which also hurts performance (and, consequently, lengthens run 
> times). Calling out this skew can be very useful.
> !image-2018-12-06-11-27-14-719.png|width=969,height=256!  
> Or there might be cases where a single fragment takes much longer than the 
> average (indicated by an extreme skew in the Gantt chart).
> !image-2018-12-06-11-19-00-339.png|width=969,height=150!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6879) Indicate a warning in the WebUI when a query makes little to no progress for a while

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821517#comment-16821517
 ] 

Kunal Khatua commented on DRILL-6879:
-

Hi Bridget,

The content looks good, but the table in 
[https://drill.apache.org/docs/query-profiles/#query-profile-warnings] is 
unusually squeezed for the *Icon* column.

Can you see if something like suffixing {{}} to the *Icon* column header 
helps?

> Indicate a warning in the WebUI when a query makes little to no progress for 
> a while
> 
>
> Key: DRILL-6879
> URL: https://issues.apache.org/jira/browse/DRILL-6879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring, Web Server
>Affects Versions: 1.14.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-complete, ready-to-commit
> Fix For: 1.16.0
>
> Attachments: image-2018-12-04-11-54-54-247.png, 
> image-2018-12-06-11-19-00-339.png, image-2018-12-06-11-27-14-719.png
>
>
> When running a very large query on a cluster with limited resources, we 
> noticed that one node's JVM froze the fragment threads as it 
> tried to do some work (GC, perhaps?). This is a clear indication that the 
> query is stuck in a state it might not recover from.
>  Under such circumstances, it makes sense to cancel, or at least warn the user 
> on that page, when the query exceeds a certain threshold. 
>  To detect this, the user will find that the {{Last Progress}} column in 
> the Fragments Overview section shows large times.
> !image-2018-12-04-11-54-54-247.png|width=969,height=336!
> In addition, there are instances where a query might have buffered operators 
> spilling to disk, which also hurts performance (and, consequently, lengthens run 
> times). Calling out this skew can be very useful.
> !image-2018-12-06-11-27-14-719.png|width=969,height=256!  
> Or there might be cases where a single fragment takes much longer than the 
> average (indicated by an extreme skew in the Gantt chart).
> !image-2018-12-06-11-19-00-339.png|width=969,height=150!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6939) Indicate when a query is submitted and is in progress

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821511#comment-16821511
 ] 

Kunal Khatua commented on DRILL-6939:
-

LGTM. Thanks!

> Indicate when a query is submitted and is in progress
> -
>
> Key: DRILL-6939
> URL: https://issues.apache.org/jira/browse/DRILL-6939
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.14.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Critical
>  Labels: doc-complete, ready-to-commit, user-experience
> Fix For: 1.16.0
>
>
> When submitting a long-running query, the web UI shows no indication of the 
> query having been submitted. What is needed is some form of UI enhancement 
> that shows that the submitted query is in progress and the results will load 
> when available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6921) Add button to reset Options filter

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821498#comment-16821498
 ] 

Kunal Khatua commented on DRILL-6921:
-

LGTM. Thanks!

> Add button to reset Options filter
> --
>
> Key: DRILL-6921
> URL: https://issues.apache.org/jira/browse/DRILL-6921
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-complete, ready-to-commit
> Fix For: 1.16.0
>
>
> Currently we have the ability to search options or use a quick filter in the 
> Web UI. To reset the filter, the user needs to delete the input from the search 
> pane manually. It would be nice if we had a Reset button.
>  Also, we can consider keeping the filter applied after an option update/reset 
> rather than reloading the page without filtering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (DRILL-6050) Provide a limit to number of rows fetched for a query in UI

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821470#comment-16821470
 ] 

Kunal Khatua edited comment on DRILL-6050 at 4/18/19 9:23 PM:
--

All looks good except the 3rd link.

[https://drill.apache.org/docs/rest-api-introduction/#post-query-json]

The *Request Body* example is missing the autolimit key-value. This is, 
however, normal since the parameter is optional. 

Do you want to call it out specifically?


was (Author: kkhatua):
All looks good except the 3rd link.

[https://drill.apache.org/docs/rest-api-introduction/#post-query-json]

The *Request Body* example is missing the autolimit key-value. This is, 
however, normal. 

> Provide a limit to number of rows fetched for a query in UI
> ---
>
> Key: DRILL-6050
> URL: https://issues.apache.org/jira/browse/DRILL-6050
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: doc-complete, ready-to-commit, user-experience
> Fix For: 1.16.0, 1.17.0
>
>
> Currently, the WebServer needs to process the entire set of results and 
> stream it back to the WebClient. 
> Since the WebUI does paginate results, we can load a larger set for 
> pagination on the browser client and take pressure off the WebServer, which 
> otherwise has to host all the data.
> e.g. Fetching all rows from a 1-billion-record table is impractical and can 
> be capped at 10K. Currently, the user has to explicitly specify LIMIT in the 
> submitted query. 
> An input field can be provided in the UI to allow this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6050) Provide a limit to number of rows fetched for a query in UI

2019-04-18 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821470#comment-16821470
 ] 

Kunal Khatua commented on DRILL-6050:
-

All looks good except the 3rd link.

[https://drill.apache.org/docs/rest-api-introduction/#post-query-json]

The *Request Body* example is missing the autolimit key-value. This is, 
however, normal. 

> Provide a limit to number of rows fetched for a query in UI
> ---
>
> Key: DRILL-6050
> URL: https://issues.apache.org/jira/browse/DRILL-6050
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: doc-complete, ready-to-commit, user-experience
> Fix For: 1.16.0, 1.17.0
>
>
> Currently, the WebServer needs to process the entire set of results and 
> stream it back to the WebClient. 
> Since the WebUI does paginate results, we can load a larger set for 
> pagination on the browser client and take pressure off the WebServer, which 
> otherwise has to host all the data.
> e.g. Fetching all rows from a 1-billion-record table is impractical and can 
> be capped at 10K. Currently, the user has to explicitly specify LIMIT in the 
> submitted query. 
> An input field can be provided in the UI to allow this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7110) Skip writing profile when an ALTER SESSION is executed

2019-04-16 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819273#comment-16819273
 ] 

Kunal Khatua commented on DRILL-7110:
-

[~bbevens] LGTM. Thanks!

> Skip writing profile when an ALTER SESSION is executed
> --
>
> Key: DRILL-7110
> URL: https://issues.apache.org/jira/browse/DRILL-7110
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: doc-complete, ready-to-commit
> Fix For: 1.16.0
>
>
> Currently, any {{ALTER }} query will be logged. While this is useful, 
> it can potentially add up to a lot of profiles being written unnecessarily, 
> since those changes are also reflected on the queries that follow.
> This JIRA is proposing an option to skip writing such profiles to the profile 
> store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7172) README files for steps describing building C++ Drill client (with protobuf) needs to be updated

2019-04-11 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7172:

Affects Version/s: 1.16.0

> README files for steps describing building C++ Drill client (with protobuf) 
> needs to be updated
> ---
>
> Key: DRILL-7172
> URL: https://issues.apache.org/jira/browse/DRILL-7172
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Denys Ordynskiy
>Priority: Major
>
> During the 1.16.0 release, it was noticed that the steps (primarily library 
> versions) for rebuilding with protobuf-3.6.1 were outdated. 
> e.g. the Boost library version for building is reported as 1.53 in one place, 
> whereas 1.60 in another. The steps worked on an Ubuntu setup, but failed for 
> CentOS 7.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7172) README files for steps describing building C++ Drill client (with protobuf) needs to be updated

2019-04-11 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7172:

Fix Version/s: 1.17.0

> README files for steps describing building C++ Drill client (with protobuf) 
> needs to be updated
> ---
>
> Key: DRILL-7172
> URL: https://issues.apache.org/jira/browse/DRILL-7172
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Denys Ordynskiy
>Priority: Major
> Fix For: 1.17.0
>
>
> During the 1.16.0 release, it was noticed that the steps (primarily library 
> versions) for rebuilding with protobuf-3.6.1 were outdated. 
> e.g. the Boost library version for building is reported as 1.53 in one place, 
> whereas 1.60 in another. The steps worked on an Ubuntu setup, but failed for 
> CentOS 7.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7172) README files for steps describing building C++ Drill client (with protobuf) needs to be updated

2019-04-11 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7172:
---

 Summary: README files for steps describing building C++ Drill 
client (with protobuf) needs to be updated
 Key: DRILL-7172
 URL: https://issues.apache.org/jira/browse/DRILL-7172
 Project: Apache Drill
  Issue Type: Task
Reporter: Kunal Khatua
Assignee: Denys Ordynskiy


During the 1.16.0 release, it was noticed that the steps (primarily library 
versions) for rebuilding with protobuf-3.6.1 were outdated. 

e.g. the Boost library version for building is reported as 1.53 in one place, whereas 1.60 
in another. The steps worked on an Ubuntu setup, but failed for CentOS 7.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7160) exec.query.max_rows QUERY-level options are shown on Profiles tab

2019-04-08 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7160:

Reviewer: Volodymyr Vysotskyi

> exec.query.max_rows QUERY-level options are shown on Profiles tab
> -
>
> Key: DRILL-7160
> URL: https://issues.apache.org/jira/browse/DRILL-7160
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Blocker
> Fix For: 1.16.0
>
>
> As [~arina] has noticed, the option {{exec.query.max_rows}} is shown on the Web UI's 
> Profiles tab even when it was not set explicitly. The issue arises because the option 
> is being set at the query level internally.
> From the code, it looks like it is set in 
> {{DrillSqlWorker.checkAndApplyAutoLimit()}}, and perhaps a check whether the 
> value differs from the existing one should be added.
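
A hypothetical sketch of the guard suggested above: record the auto-limit as a QUERY-level option only when it differs from the value already in effect. Names and types are illustrative, not Drill's actual API.

{code:java}
import java.util.function.LongConsumer;

/** Illustrative only: mimic the suggested check before setting the option. */
public class AutoLimitGuard {
  static void applyAutoLimitIfChanged(long currentMaxRows, long autoLimit,
                                      LongConsumer setQueryLevelOption) {
    if (autoLimit > 0 && autoLimit != currentMaxRows) {
      setQueryLevelOption.accept(autoLimit);   // only now appears as a QUERY-level option
    }
    // otherwise leave the options untouched, so nothing extra shows in the profile
  }

  public static void main(String[] args) {
    applyAutoLimitIfChanged(0, 10, v -> System.out.println("set exec.query.max_rows=" + v));
    applyAutoLimitIfChanged(10, 10, v -> System.out.println("never printed"));
  }
}
{code}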



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7136) Num_buckets for HashAgg in profile may be inaccurate

2019-04-08 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7136:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Num_buckets for HashAgg in profile may be inaccurate
> 
>
> Key: DRILL-7136
> URL: https://issues.apache.org/jira/browse/DRILL-7136
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Robert Hou
>Assignee: Boaz Ben-Zvi
>Priority: Major
> Fix For: 1.17.0
>
> Attachments: 23650ee5-6721-8a8f-7dd3-f5dd09a3a7b0.sys.drill
>
>
> I ran TPCH query 17 with sf 1000.  Here is the query:
> {noformat}
> select
>   sum(l.l_extendedprice) / 7.0 as avg_yearly
> from
>   lineitem l,
>   part p
> where
>   p.p_partkey = l.l_partkey
>   and p.p_brand = 'Brand#13'
>   and p.p_container = 'JUMBO CAN'
>   and l.l_quantity < (
> select
>   0.2 * avg(l2.l_quantity)
> from
>   lineitem l2
> where
>   l2.l_partkey = p.p_partkey
>   );
> {noformat}
> One of the hash agg operators has resized 6 times.  It should have 4M 
> buckets.  But the profile shows it has 64K buckets.
> I have attached a sample profile.  In this profile, the hash agg operator is 
> (04-02).
> {noformat}
> Operator Metrics
> Minor FragmentNUM_BUCKETS NUM_ENTRIES NUM_RESIZING
> RESIZING_TIME_MSNUM_PARTITIONS  SPILLED_PARTITIONS  SPILL_MB  
>   SPILL_CYCLE INPUT_BATCH_COUNT   AVG_INPUT_BATCH_BYTES   
> AVG_INPUT_ROW_BYTES INPUT_RECORD_COUNT  OUTPUT_BATCH_COUNT  
> AVG_OUTPUT_BATCH_BYTES  AVG_OUTPUT_ROW_BYTESOUTPUT_RECORD_COUNT
> 04-00-02  65,536 748,746  6   364 1   
> 582 0   813 582,653 18  26,316,456  401 1,631,943 
>   25  26,176,350
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7160) exec.query.max_rows QUERY-level options are shown on Profiles tab

2019-04-08 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7160:

Fix Version/s: (was: 1.16.0)
   1.17.0

> exec.query.max_rows QUERY-level options are shown on Profiles tab
> -
>
> Key: DRILL-7160
> URL: https://issues.apache.org/jira/browse/DRILL-7160
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> As [~arina] has noticed, the option {{exec.query.max_rows}} is shown on the Web UI's 
> Profiles tab even when it was not set explicitly. The issue arises because the option 
> is being set at the query level internally.
> From the code, it looks like it is set in 
> {{DrillSqlWorker.checkAndApplyAutoLimit()}}, and perhaps a check whether the 
> value differs from the existing one should be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7062) Run-time row group pruning

2019-04-08 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7062:

Reviewer: Aman Sinha

> Run-time row group pruning
> --
>
> Key: DRILL-7062
> URL: https://issues.apache.org/jira/browse/DRILL-7062
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Boaz Ben-Zvi
>Priority: Major
> Fix For: 1.16.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-04-04 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Affects Version/s: (was: 1.15.0)
   1.16.0

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.16.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server can have a pre-defined value as 
> well (default 0; i.e. no limit) so that an _admin_ would be able to ensure a 
> max limit on the resultset size as well.
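
For reference, a minimal JDBC sketch (not Drill source code) of the standard {{Statement.setMaxRows()}} call this feature plumbs through; the connection URL and query are placeholders.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxRowsExample {
  public static void main(String[] args) throws Exception {
    // Placeholder connection string; requires the Drill JDBC driver on the classpath.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
         Statement stmt = conn.createStatement()) {
      stmt.setMaxRows(100);                      // cap the result set at 100 rows
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM cp.`employee.json`")) {
        int rows = 0;
        while (rs.next()) {
          rows++;
        }
        System.out.println("Fetched " + rows + " rows (at most 100)");
      }
    }
  }
}
{code}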



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-6960) Auto Limit Wrapping should not apply to non-select query

2019-04-04 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua resolved DRILL-6960.
-
   Resolution: Duplicate
Fix Version/s: (was: 1.17.0)
   1.16.0

Marked as {{Duplicate}} since the issue is resolved with the PR for DRILL-7048, 
which is a superset of this.

> Auto Limit Wrapping should not apply to non-select query
> 
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Blocker
>  Labels: doc-impacting, user-experience
> Fix For: 1.16.0
>
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when a user enters {{SHOW DATABASES}}, after limit 
> wrapping {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} will be posted. 
> This results in parsing errors, like:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}.
> The fix should involve a javascript check for all non-select queries and not 
> apply the LIMIT wrap for those queries.
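
The fix itself is a JavaScript check in the Web UI; purely to illustrate the logic of that check (shown in Java for consistency with the other sketches in this digest), the wrapping decision could look roughly like this:

{code:java}
/** Illustrative only: wrap with LIMIT only when the statement is a plain SELECT. */
public class AutoLimitWrap {
  static String maybeWrap(String query, int autoLimit) {
    String trimmed = query.trim();
    boolean isSelect = trimmed.regionMatches(true, 0, "SELECT", 0, 6);
    if (!isSelect || autoLimit <= 0) {
      return query;                              // e.g. SHOW DATABASES stays untouched
    }
    return "SELECT * FROM (" + trimmed + ") LIMIT " + autoLimit;
  }

  public static void main(String[] args) {
    System.out.println(maybeWrap("SHOW DATABASES", 10));
    System.out.println(maybeWrap("select * from cp.`employee.json`", 10));
  }
}
{code}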



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-04-04 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Fix Version/s: (was: 1.17.0)
   1.16.0

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.16.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server can have a pre-defined value as 
> well (default 0; i.e. no limit) so that an _admin_ would be able to ensure a 
> max limit on the resultset size as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6050) Provide a limit to number of rows fetched for a query in UI

2019-04-01 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6050:

Labels: doc-impacting ready-to-commit user-experience  (was: 
ready-to-commit user-experience)

> Provide a limit to number of rows fetched for a query in UI
> ---
>
> Key: DRILL-6050
> URL: https://issues.apache.org/jira/browse/DRILL-6050
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: doc-impacting, ready-to-commit, user-experience
> Fix For: 1.16.0, 1.17.0
>
>
> Currently, the WebServer side needs to process the entire set of results and 
> stream it back to the WebClient. 
> Since the WebUI does paginate results, we can load a larger set for 
> pagination on the browser client and relieve pressure on the WebServer to 
> host all the data.
> e.g. Fetching all rows from a 1-billion-record table is impractical and can 
> be capped at 10K. Currently, the user has to explicitly specify LIMIT in the 
> submitted query. 
> An option can be provided in the field to allow for this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6939) Indicate when a query is submitted and is in progress

2019-04-01 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6939:

Labels: doc-impacting ready-to-commit user-experience  (was: 
ready-to-commit user-experience)

> Indicate when a query is submitted and is in progress
> -
>
> Key: DRILL-6939
> URL: https://issues.apache.org/jira/browse/DRILL-6939
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.14.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Critical
>  Labels: doc-impacting, ready-to-commit, user-experience
> Fix For: 1.16.0
>
>
> When submitting a long-running query, the web UI shows no indication of the 
> query having been submitted. What is needed is some form of UI enhancement 
> that shows that the submitted query is in progress and the results will load 
> when available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7032) Ignore corrupt rows in a PCAP file

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807290#comment-16807290
 ] 

Kunal Khatua commented on DRILL-7032:
-

[~cgivre]  does this require any additional Documentation beyond a mention in 
the release notes? (cc: [~bbevens])

> Ignore corrupt rows in a PCAP file
> --
>
> Key: DRILL-7032
> URL: https://issues.apache.org/jira/browse/DRILL-7032
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.15.0
> Environment: OS: Ubuntu 18.04
> Drill version: 1.15.0
> Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
>Reporter: Giovanni Conte
>Assignee: Charles Givre
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> It would be useful for Drill to have some ability to ignore corrupt rows in a 
> PCAP file instead of throwing the Java exception.
> This is because there are many pcap files with corrupted lines, and this 
> functionality will avoid having to pre-fix the packet captures (see the 
> attached example file).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7054) PCAP timestamp in milliseconds

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807288#comment-16807288
 ] 

Kunal Khatua commented on DRILL-7054:
-

[~manang] / [~cgivre] does this require any additional Documentation beyond a 
mention in the release notes? (cc: [~bbevens])

> PCAP timestamp in milliseconds
> --
>
> Key: DRILL-7054
> URL: https://issues.apache.org/jira/browse/DRILL-7054
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types, Storage - Other
>Reporter: Angelo Mantellini
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> It is important to show the timestamp with microsecond precision.
> The timestamp currently has millisecond precision, and in some cases that is not 
> enough.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7060) Support JsonParser Feature 'ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER' in JsonReader

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807287#comment-16807287
 ] 

Kunal Khatua commented on DRILL-7060:
-

[~agirish] does this require any additional Documentation beyond a mention in 
the release notes? (cc: [~bbevens])

> Support JsonParser Feature 'ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER' in 
> JsonReader
> -
>
> Key: DRILL-7060
> URL: https://issues.apache.org/jira/browse/DRILL-7060
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - JSON
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Abhishek Girish
>Assignee: Abhishek Girish
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Some JSON files may have strings with backslashes, which are read as escape 
> characters. By default, only standard escape characters are allowed, so 
> querying such files would fail. For example:
> Data
> {code}
> {"file":"C:\Sfiles\escape.json"}
> {code}
> Error
> {code}
> (com.fasterxml.jackson.core.JsonParseException) Unrecognized character escape 
> 'S' (code 83)
>  at [Source: (org.apache.drill.exec.store.dfs.DrillFSDataInputStream); line: 
> 1, column: 178]
> com.fasterxml.jackson.core.JsonParser._constructError():1804
> com.fasterxml.jackson.core.base.ParserMinimalBase._reportError():663
> 
> com.fasterxml.jackson.core.base.ParserMinimalBase._handleUnrecognizedCharacterEscape():640
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser._decodeEscaped():3243
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipString():2537
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken():683
> org.apache.drill.exec.vector.complex.fn.JsonReader.writeData():342
> org.apache.drill.exec.vector.complex.fn.JsonReader.writeDataSwitch():298
> org.apache.drill.exec.vector.complex.fn.JsonReader.writeToVector():246
> org.apache.drill.exec.vector.complex.fn.JsonReader.write():205
> org.apache.drill.exec.store.easy.json.JSONRecordReader.next():216
> org.apache.drill.exec.physical.impl.ScanBatch.internalNext():223
> ...
> ...
> {code}
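
For context, a standalone Jackson sketch (not Drill code) of the parser feature named in the title; with it enabled, a non-standard escape such as {{\S}} is accepted (yielding the escaped character) instead of throwing the parse exception shown above.

{code:java}
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BackslashEscapeExample {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // Accept any backslash escape instead of failing on non-standard ones like \S.
    mapper.configure(JsonParser.Feature.ALLOW_BACKSLASH_ESCAPING_ANY_CHARACTER, true);
    // The JSON text is {"file":"C:\Sfiles\escape.json"} -- \S and \e are non-standard escapes.
    JsonNode node = mapper.readTree("{\"file\":\"C:\\Sfiles\\escape.json\"}");
    System.out.println(node.get("file").asText());   // prints C:Sfilesescape.json
  }
}
{code}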



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7069) Poor performance of transformBinaryInMetadataCache

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807286#comment-16807286
 ] 

Kunal Khatua commented on DRILL-7069:
-

[~ben-zvi] does this require any additional Documentation beyond a mention in 
the release notes? (cc: [~bbevens])

> Poor performance of transformBinaryInMetadataCache
> --
>
> Key: DRILL-7069
> URL: https://issues.apache.org/jira/browse/DRILL-7069
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Metadata
>Affects Versions: 1.15.0
>Reporter: Boaz Ben-Zvi
>Assignee: Boaz Ben-Zvi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> The performance of the method *transformBinaryInMetadataCache* scales poorly 
> as the table's number of underlying files, row-groups and columns grows. This 
> method is invoked during planning of every query using this table.
>      A test on a table using 219 directories (each with 20 files), 1 
> row-group in each file, and 94 columns, measured about *1340 milliseconds*.
>     The main culprits are the version checks, which take place in *every 
> iteration* (i.e., about 400k times in the previous example) and involve 
> construction of 6 MetadataVersion objects (and possibly garbage collections).
>      Removing the version checks from the loops improved this method's 
> performance on the above test down to about *250 milliseconds*.
>  
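
A purely illustrative sketch of the suggested change (the names are hypothetical, not the actual Drill code): compute the version comparison once per call, before the loops, and reuse the result inside them.

{code:java}
import java.util.List;

/** Illustrative only: hoist the metadata-version comparison out of the loops. */
public class VersionCheckHoist {
  // Stand-in for constructing and comparing MetadataVersion objects, done exactly once.
  static boolean cacheAtLeast(String cacheVersion, String requiredVersion) {
    return cacheVersion.compareTo(requiredVersion) >= 0;
  }

  static void transformColumns(List<String> columns, String cacheVersion) {
    boolean needsBinaryTransform = cacheAtLeast(cacheVersion, "3.0");  // hoisted out of the loop
    for (String column : columns) {
      if (needsBinaryTransform) {
        // ... transform the binary min/max statistics for this column ...
      }
    }
  }
}
{code}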



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7072) Query with semi join fails for JDBC storage plugin

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807284#comment-16807284
 ] 

Kunal Khatua commented on DRILL-7072:
-

[~vvysotskyi] does this require any Documentation? (cc: [~bbevens])

> Query with semi join fails for JDBC storage plugin
> --
>
> Key: DRILL-7072
> URL: https://issues.apache.org/jira/browse/DRILL-7072
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - JDBC
>Affects Versions: 1.15.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> When running a query with semi join for JDBC storage plugin, it fails with 
> class cast exception:
> {code:sql}
> select person_id from mysql.`drill_mysql_test`.person t1
> where exists (
> select person_id from mysql.`drill_mysql_test`.person
> where t1.person_id = person_id)
> {code}
> {noformat}
> SYSTEM ERROR: ClassCastException: 
> org.apache.calcite.adapter.jdbc.JdbcRules$JdbcAggregate cannot be cast to 
> org.apache.drill.exec.planner.logical.DrillAggregateRel
> Please, refer to logs for more information.
> [Error Id: 85a27762-a4e5-4571-909f-0efa18ca0689 on user515050-pc:31013]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> ClassCastException: org.apache.calcite.adapter.jdbc.JdbcRules$JdbcAggregate 
> cannot be cast to org.apache.drill.exec.planner.logical.DrillAggregateRel
> Please, refer to logs for more information.
> [Error Id: 85a27762-a4e5-4571-909f-0efa18ca0689 on user515050-pc:31013]
>   at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:779)
>  [classes/:na]
>   at 
> org.apache.drill.exec.work.foreman.QueryStateProcessor.checkCommonStates(QueryStateProcessor.java:325)
>  [classes/:na]
>   at 
> org.apache.drill.exec.work.foreman.QueryStateProcessor.planning(QueryStateProcessor.java:221)
>  [classes/:na]
>   at 
> org.apache.drill.exec.work.foreman.QueryStateProcessor.moveToState(QueryStateProcessor.java:83)
>  [classes/:na]
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:299) 
> [classes/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
> exception during fragment initialization: 
> org.apache.calcite.adapter.jdbc.JdbcRules$JdbcAggregate cannot be cast to 
> org.apache.drill.exec.planner.logical.DrillAggregateRel
>   at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:300) 
> [classes/:na]
>   ... 3 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.calcite.adapter.jdbc.JdbcRules$JdbcAggregate cannot be cast to 
> org.apache.drill.exec.planner.logical.DrillAggregateRel
>   at 
> org.apache.drill.exec.planner.logical.DrillSemiJoinRule.matches(DrillSemiJoinRule.java:171)
>  ~[classes/:na]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:557) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:420) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:257)
>  ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>  ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:216) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:203) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
>   at 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:431)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:382)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:365)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:289)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:331)
>  ~[classes/:na]

[jira] [Commented] (DRILL-7107) Unable to connect to Drill 1.15 through ZK

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807282#comment-16807282
 ] 

Kunal Khatua commented on DRILL-7107:
-

[~karthikm]does this require any Documentation? (cc: [~bbevens])

> Unable to connect to Drill 1.15 through ZK
> --
>
> Key: DRILL-7107
> URL: https://issues.apache.org/jira/browse/DRILL-7107
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.15.0
>Reporter: Karthikeyan Manivannan
>Assignee: Karthikeyan Manivannan
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> After upgrading to Drill 1.15, users are seeing that they are no longer able to 
> connect to Drill using the ZK quorum. They are getting the following "Unable to 
> setup ZK for client" error.
> [~]$ sqlline -u "jdbc:drill:zk=172.16.2.165:5181;auth=maprsasl"
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. 
> (state=,code=0)
> java.sql.SQLNonTransientConnectionException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
>  at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:174)
>  at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
>  at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
>  at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>  at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
>  at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
>  at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
>  at sqlline.Commands.connect(Commands.java:1247)
>  at sqlline.Commands.connect(Commands.java:1139)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>  at sqlline.SqlLine.dispatch(SqlLine.java:722)
>  at sqlline.SqlLine.initArgs(SqlLine.java:416)
>  at sqlline.SqlLine.begin(SqlLine.java:514)
>  at sqlline.SqlLine.start(SqlLine.java:264)
>  at sqlline.SqlLine.main(SqlLine.java:195)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> client.
>  at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:340)
>  at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:165)
>  ... 18 more
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.drill.exec.coord.zk.ZKACLProviderFactory.findACLProvider(ZKACLProviderFactory.java:68)
>  at 
> org.apache.drill.exec.coord.zk.ZKACLProviderFactory.getACLProvider(ZKACLProviderFactory.java:47)
>  at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.(ZKClusterCoordinator.java:114)
>  at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.(ZKClusterCoordinator.java:86)
>  at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:337)
>  ... 19 more
> Apache Drill 1.15.0.0
> "This isn't your grandfather's SQL."
> sqlline>
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7125) REFRESH TABLE METADATA fails after upgrade from Drill 1.13.0 to Drill 1.15.0

2019-04-01 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807281#comment-16807281
 ] 

Kunal Khatua commented on DRILL-7125:
-

[~shamirwasia] does this require any Documentation? (cc: [~bbevens])

> REFRESH TABLE METADATA fails after upgrade from Drill 1.13.0 to Drill 1.15.0
> 
>
> Key: DRILL-7125
> URL: https://issues.apache.org/jira/browse/DRILL-7125
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> REFRESH TABLE METADATA command worked successfully on Drill 1.13.0; however, 
> after upgrading to Drill 1.15.0 there are sometimes errors.
> {code:java}
> In sqlline logging in as regular user "alice" or Drill process user "admin" 
> gives the same error (permission denied)
> If this helps, here's also what I am seeing on sqlline
> Error message contains random but valid user's names other than the user 
> (Alice) that logged in to refresh the metadata. Looks like during the refresh 
> metadata drillbits seems to incorrectly try the metadata generation as some 
> random user which obviously does not have write access
> 2019-03-12 15:27:20,564 [2377cdd9-dd6e-d213-de1a-70b50d3641d7:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 2377cdd9-dd6e-d213-de1a-70b50d3641d7:0:0: State change requested RUNNING --> 
> FINISHED
> 2019-03-12 15:27:20,564 [2377cdd9-dd6e-d213-de1a-70b50d3641d7:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 2377cdd9-dd6e-d213-de1a-70b50d3641d7:0:0: State to report: FINISHED
> 2019-03-12 15:27:23,032 [2377cdb3-86cc-438d-8ada-787d2a84df9a:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2377cdb3-86cc-438d-8ada-787d2a84df9a issued by alice: REFRESH TABLE METADATA 
> dfs.root.`/user/alice/logs/hive/warehouse/detail`
> 2019-03-12 15:27:23,350 [2377cdb3-86cc-438d-8ada-787d2a84df9a:foreman] ERROR 
> o.a.d.e.s.parquet.metadata.Metadata - Failed to read 
> 'file://user/alice/logs/hive/warehouse/detail/.drill.parquet_metadata_directories'
>  metadata file
> java.io.IOException: 2879.5854742.1036302960 
> /user/alice/logs/hive/warehouse/detail/file1/.drill.parquet_metadata 
> (Permission denied)
> at com.mapr.fs.Inode.throwIfFailed(Inode.java:390) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at com.mapr.fs.Inode.flushPages(Inode.java:505) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at com.mapr.fs.Inode.releaseDirty(Inode.java:583) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at 
> com.mapr.fs.MapRFsOutStream.dropCurrentPage(MapRFsOutStream.java:73) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at com.mapr.fs.MapRFsOutStream.write(MapRFsOutStream.java:85) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at 
> com.mapr.fs.MapRFsDataOutputStream.write(MapRFsDataOutputStream.java:39) 
> ~[maprfs-6.1.0-mapr.jar:na]
> at 
> com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2085)
>  ~[jackson-core-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1097)
>  ~[jackson-core-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2645)
>  ~[jackson-databind-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:381)
>  ~[jackson-core-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1726)
>  ~[jackson-core-2.9.5.jar:2.9.5]
> at 
> org.apache.drill.exec.store.parquet.metadata.Metadata_V3$ColumnMetadata_v3$Serializer.serialize(Metadata_V3.java:448)
>  ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at 
> org.apache.drill.exec.store.parquet.metadata.Metadata_V3$ColumnMetadata_v3$Serializer.serialize(Metadata_V3.java:417)
>  ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at 
> com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:119)
>  ~[jackson-databind-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:79)
>  ~[jackson-databind-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:18)
>  ~[jackson-databind-2.9.5.jar:2.9.5]
> at 
> com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:727)
>  ~[jackson-databind-2.9.5.jar:2.9.5]
> at 
> 

[jira] [Updated] (DRILL-7141) Hash-Join (and Agg) should always spill to disk the least used partition

2019-04-01 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7141:

Fix Version/s: (was: Future)
   1.17.0

> Hash-Join (and Agg) should always spill to disk the least used partition
> 
>
> Key: DRILL-7141
> URL: https://issues.apache.org/jira/browse/DRILL-7141
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Boaz Ben-Zvi
>Priority: Major
> Fix For: 1.17.0
>
>
> When the probe-side data for a hash join is skewed, it is preferable to have 
> the corresponding build-side partition in memory. 
> Currently, with the spill-to-disk feature, the partition to spill to disk 
> is selected at random. This means that highly skewed probe-side data 
> would also spill for lack of a corresponding hash table 
> partition in memory. 
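
A hypothetical sketch of the selection policy described above (the interface and names are illustrative, not Drill's actual classes): pick the in-memory partition holding the least data as the spill victim instead of picking one at random.

{code:java}
import java.util.List;

/** Illustrative only: choose the least-used in-memory partition to spill. */
public class SpillVictimChooser {
  interface Partition {
    long inMemoryBytes();
    boolean isSpilled();
  }

  static Partition chooseSpillVictim(List<Partition> partitions) {
    Partition victim = null;
    for (Partition p : partitions) {
      if (p.isSpilled()) {
        continue;                                // already on disk, nothing to gain
      }
      if (victim == null || p.inMemoryBytes() < victim.inMemoryBytes()) {
        victim = p;                              // smallest in-memory partition so far
      }
    }
    return victim;                               // may be null if everything is spilled
  }
}
{code}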



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6400) Hash-Aggr: Avoid recreating common Hash-Table setups for every partition

2019-03-29 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6400:

Fix Version/s: Future

> Hash-Aggr: Avoid recreating common Hash-Table setups for every partition
> 
>
> Key: DRILL-6400
> URL: https://issues.apache.org/jira/browse/DRILL-6400
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.13.0
>Reporter: Boaz Ben-Zvi
>Assignee: Boaz Ben-Zvi
>Priority: Minor
> Fix For: Future
>
>
>  The current Hash-Aggr code (and soon the Hash-Join code) creates multiple 
> partitions to hold the incoming data; each partition with its own HashTable. 
>      The current code invokes the HashTable method 
> _createAndSetupHashTable()_ for *each* partition. But most of the setups done 
> by this method are identical for all the partitions (e.g., code generation).  
> Calling this method has a performance cost (some local tests measured between 
> 3 - 30 milliseconds, depending on the key columns).
>   Suggested performance improvement: Extract the common settings to be called 
> *once*, and use the results later by all the partitions. When running with 
> the default 32 partitions, this can have a measurable improvement (and if 
> spilling, this method is used again).
>  
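
A hypothetical sketch of the suggested improvement (names are illustrative, not the actual Drill code): build the expensive, partition-independent setup once and hand the same instance to every partition.

{code:java}
/** Illustrative only: share the common hash-table setup across partitions. */
public class SharedHashTableSetup {
  static final class CommonSetup { /* generated code, key schema, comparators, ... */ }

  static CommonSetup buildCommonSetup() {
    return new CommonSetup();                    // costly work, done exactly once
  }

  static void createPartitions(int numPartitions) {
    CommonSetup shared = buildCommonSetup();
    for (int i = 0; i < numPartitions; i++) {
      createHashTable(shared, i);                // only cheap per-partition work here
    }
  }

  static void createHashTable(CommonSetup shared, int partitionIndex) {
    // reuse 'shared' instead of re-running code generation for this partition
  }
}
{code}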



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7141) Hash-Join (and Agg) should always spill to disk the least used partition

2019-03-29 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7141:
---

 Summary: Hash-Join (and Agg) should always spill to disk the least 
used partition
 Key: DRILL-7141
 URL: https://issues.apache.org/jira/browse/DRILL-7141
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Affects Versions: 1.15.0
Reporter: Kunal Khatua
Assignee: Boaz Ben-Zvi
 Fix For: Future


When the probe-side data for a hash join is skewed, it is preferable to have 
the corresponding build-side partition in memory. 

Currently, with the spill-to-disk feature, the partition to spill to disk 
is selected at random. This means that highly skewed probe-side data 
would also spill for lack of a corresponding hash table partition in memory. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Labels: doc-impacting  (was: )

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting
> Fix For: 1.17.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server can have a pre-defined value as 
> well (default 0; i.e. no limit) so that an _admin_ would be able to ensure a 
> max limit on the resultset size as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server can have a pre-defined value as 
> well (default 0; i.e. no limit) so that an _admin_ would be able to ensure a 
> max limit on the resultset size as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6050) Provide a limit to number of rows fetched for a query in UI

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6050:

Fix Version/s: 1.17.0

> Provide a limit to number of rows fetched for a query in UI
> ---
>
> Key: DRILL-6050
> URL: https://issues.apache.org/jira/browse/DRILL-6050
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: ready-to-commit, user-experience
> Fix For: 1.16.0, 1.17.0
>
>
> Currently, the WebServer side needs to process the entire set of results and 
> stream it back to the WebClient. 
> Since the WebUI does paginate results, we can load a larger set for 
> pagination on the browser client and relieve pressure on the WebServer to 
> host all the data.
> e.g. Fetching all rows from a 1-billion-record table is impractical and can 
> be capped at 10K. Currently, the user has to explicitly specify LIMIT in the 
> submitted query. 
> An option can be provided in the field to allow for this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6050) Provide a limit to number of rows fetched for a query in UI

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6050:

Labels: ready-to-commit user-experience  (was: doc-impacting 
ready-to-commit user-experience)

> Provide a limit to number of rows fetched for a query in UI
> ---
>
> Key: DRILL-6050
> URL: https://issues.apache.org/jira/browse/DRILL-6050
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: ready-to-commit, user-experience
> Fix For: 1.16.0
>
>
> Currently, the WebServer side needs to process the entire set of results and 
> stream it back to the WebClient. 
> Since the WebUI does paginate results, we can load a larger set for 
> pagination on the browser client and relieve pressure on the WebServer to 
> host all the data.
> e.g. Fetching all rows from a 1-billion-record table is impractical and can 
> be capped at 10K. Currently, the user has to explicitly specify LIMIT in the 
> submitted query. 
> An option can be provided in the field to allow for this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6960) Auto Limit Wrapping should not apply to non-select query

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6960:

Labels: doc-impacting user-experience  (was: user-experience)

> Auto Limit Wrapping should not apply to non-select query
> 
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Blocker
>  Labels: doc-impacting, user-experience
> Fix For: 1.17.0
>
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when a user enters {{SHOW DATABASES}}, after limit 
> wrapping {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} will be posted. 
> This results in parsing errors, like:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}.
> The fix should involve a javascript check for all non-select queries and not 
> apply the LIMIT wrap for those queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7110) Skip writing profile when an ALTER SESSION is executed

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7110:

Fix Version/s: (was: 1.17.0)
   1.16.0

> Skip writing profile when an ALTER SESSION is executed
> --
>
> Key: DRILL-7110
> URL: https://issues.apache.org/jira/browse/DRILL-7110
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.16.0
>
>
> Currently, any {{ALTER }} query will be logged. While this is useful, 
> it can potentially add up to a lot of profiles being written unnecessarily, 
> since those changes are also reflected on the queries that follow.
> This JIRA is proposing an option to skip writing such profiles to the profile 
> store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7110) Skip writing profile when an ALTER SESSION is executed

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7110:

Reviewer: Arina Ielchiieva

> Skip writing profile when an ALTER SESSION is executed
> --
>
> Key: DRILL-7110
> URL: https://issues.apache.org/jira/browse/DRILL-7110
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Monitoring
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.16.0
>
>
> Currently, any {{ALTER }} query will be logged. While this is useful, 
> it can potentially add up to a lot of profiles being written unnecessarily, 
> since those changes are also reflected on the queries that follow.
> This JIRA is proposing an option to skip writing such profiles to the profile 
> store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5028) Opening profiles page from web ui gets very slow when a lot of history files have been stored in HDFS or Local FS.

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-5028:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Opening profiles page from web ui gets very slow when a lot of history files 
> have been stored in HDFS or Local FS.
> --
>
> Key: DRILL-5028
> URL: https://issues.apache.org/jira/browse/DRILL-5028
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Account Not Used
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.17.0
>
>
> We have a Drill cluster with 20+ nodes and we store all history profiles in 
> HDFS. Without periodically cleaning HDFS, the profiles page gets 
> slower as more queries are served.
> Code from LocalPersistentStore.java uses fs.list(false, basePath) for 
> fetching the latest 100 history profiles by default. I guess this operation 
> blocks the page loading (millions of small files can be stored in the basePath); 
> maybe we can try some other way to reach the same goal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6960) Auto Limit Wrapping should not apply to non-select query

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6960:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Auto Limit Wrapping should not apply to non-select query
> 
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Blocker
>  Labels: user-experience
> Fix For: 1.17.0
>
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when a user enters {{SHOW DATABASES}}, after limit 
> wrapping {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} will be posted. 
> This results in parsing errors, like:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}.
> The fix should involve a javascript check for all non-select queries and not 
> apply the LIMIT wrap for those queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5270) Improve loading of profiles listing in the WebUI

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-5270:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Improve loading of profiles listing in the WebUI
> 
>
> Key: DRILL-5270
> URL: https://issues.apache.org/jira/browse/DRILL-5270
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.9.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> Currently, as the number of profiles increases, we reload the same list of 
> profiles from the FS.
> An ideal improvement would be to detect whether there are any new profiles and 
> only then reload from disk. Otherwise, a cached list is sufficient.
> For a directory of 280K profiles, the load time is close to 6 seconds on a 32 
> core server. With the caching, we can get it down to as much as a few 
> milliseconds.
> To render the cache as invalid, we inspect the last modified time of the 
> directory to confirm whether a reload is needed. 
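
A minimal sketch of the caching scheme described above, assuming the listing can be keyed off the directory's last-modified time; it uses java.io.File for brevity, whereas the real store would go through the DFS API.

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Illustrative only: serve a cached profile listing until the directory changes. */
public class ProfileListCache {
  private long cachedMtime = -1;
  private List<String> cachedNames = new ArrayList<>();

  synchronized List<String> list(File profileDir) {
    long mtime = profileDir.lastModified();
    if (mtime != cachedMtime) {                  // directory changed: reload from disk
      String[] names = profileDir.list();
      cachedNames = names == null ? new ArrayList<>() : new ArrayList<>(Arrays.asList(names));
      cachedMtime = mtime;
    }
    return cachedNames;                          // otherwise the cached listing is enough
  }
}
{code}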



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-2362) Drill should manage Query Profiling archiving

2019-03-19 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-2362:

Fix Version/s: (was: 1.16.0)
   1.17.0

> Drill should manage Query Profiling archiving
> -
>
> Key: DRILL-2362
> URL: https://issues.apache.org/jira/browse/DRILL-2362
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Affects Versions: 0.7.0
>Reporter: Chris Westin
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> We collect query profile information for analysis purposes, but we keep it 
> forever. At this time, for a few queries, it isn't a problem. But as users 
> start putting Drill into production, automated use via other applications 
> will make this grow quickly. We need to come up with a retention policy 
> mechanism, with suitable settings administrators can use, and implement it so 
> that this data can be cleaned up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7110) Skip writing profile when an ALTER SESSION is executed

2019-03-18 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7110:
---

 Summary: Skip writing profile when an ALTER SESSION is executed
 Key: DRILL-7110
 URL: https://issues.apache.org/jira/browse/DRILL-7110
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Monitoring
Affects Versions: 1.16.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.17.0


Currently, any {{ALTER }} query will be logged. While this is useful, it 
can potentially add up to a lot of profiles being written unnecessarily, since 
those changes are also reflected on the queries that follow.

This JIRA is proposing an option to skip writing such profiles to the profile 
store.
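
A hypothetical sketch of the proposed behavior (the option and method names are illustrative): skip persisting a profile when the statement is an ALTER and the skip option is enabled.

{code:java}
/** Illustrative only: decide whether a profile should be written for a statement. */
public class ProfileWriteFilter {
  static boolean shouldWriteProfile(String sql, boolean skipAlterProfiles) {
    boolean isAlter = sql.trim().regionMatches(true, 0, "ALTER", 0, 5);
    return !(isAlter && skipAlterProfiles);
  }

  public static void main(String[] args) {
    System.out.println(shouldWriteProfile("ALTER SESSION SET `exec.errors.verbose` = true", true));  // false
    System.out.println(shouldWriteProfile("SELECT 1", true));                                        // true
  }
}
{code}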



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7061) Selecting option to limit results to 1000 on web UI causes parse error

2019-03-12 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790831#comment-16790831
 ] 

Kunal Khatua commented on DRILL-7061:
-

Yes. I've removed it.

> Selecting option to limit results to 1000 on web UI causes parse error
> --
>
> Key: DRILL-7061
> URL: https://issues.apache.org/jira/browse/DRILL-7061
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Critical
> Fix For: 1.16.0
>
> Attachments: image-2019-02-27-14-17-24-348.png
>
>
> Selecting option to Limit results to 1,000 causes a parse error on web UI, 
> screen shot is attached. Browser used was Chrome.
> Drill version => 1.16.0-SNAPSHOT
> commit = e342ff5
> Error reported on web UI when we press Submit button on web UI
> {noformat}
> Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 'LIMIT 
> start, count' is not allowed under the current SQL conformance level SQL 
> Query -- [autoLimit: 1,000 rows] select * from ( select length(varStr) from 
> dfs.`/root/many_json_files` ) limit 1,000 [Error Id: 
> e252d1cc-54d4-4530-837c-a1726a5be89f on qa102-45.qa.lab:31010]{noformat}
>  Stack trace from drillbit.log
> {noformat}
> 2019-02-27 21:59:18,428 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2 issued by anonymous: -- [autoLimit: 
> 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> 2019-02-27 21:59:18,438 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.d.exec.planner.sql.SqlConverter - User Error Occurred: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level ('LIMIT start, 
> count' is not allowed under the current SQL conformance level)
> org.apache.drill.common.exceptions.UserException: PARSE ERROR: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> SQL Query -- [autoLimit: 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> [Error Id: 286b7236-bafd-4ddc-ab10-aaac07e5c088 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:193) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:138)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:110)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:76)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:584) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:272) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.calcite.sql.parser.SqlParseException: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException(DrillParserImpl.java:357)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException(DrillParserImpl.java:145)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:156) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:181) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:185) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> ... 8 common frames omitted
> Caused by: org.apache.drill.exec.planner.sql.parser.impl.ParseException: 
> 'LIMIT start, count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.OrderedQueryOrExpr(DrillParserImpl.java:489)
>  

[jira] [Reopened] (DRILL-7061) Selecting option to limit results to 1000 on web UI causes parse error

2019-03-11 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reopened DRILL-7061:
-

Making this fix independent of DRILL-6960

> Selecting option to limit results to 1000 on web UI causes parse error
> --
>
> Key: DRILL-7061
> URL: https://issues.apache.org/jira/browse/DRILL-7061
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Critical
> Fix For: 1.16.0
>
> Attachments: image-2019-02-27-14-17-24-348.png
>
>
> Selecting the option to limit results to 1,000 causes a parse error on the web 
> UI; a screenshot is attached. The browser used was Chrome.
> Drill version => 1.16.0-SNAPSHOT
> commit = e342ff5
> Error reported on web UI when we press Submit button on web UI
> {noformat}
> Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 'LIMIT 
> start, count' is not allowed under the current SQL conformance level SQL 
> Query -- [autoLimit: 1,000 rows] select * from ( select length(varStr) from 
> dfs.`/root/many_json_files` ) limit 1,000 [Error Id: 
> e252d1cc-54d4-4530-837c-a1726a5be89f on qa102-45.qa.lab:31010]{noformat}
>  Stack trace from drillbit.log
> {noformat}
> 2019-02-27 21:59:18,428 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2 issued by anonymous: -- [autoLimit: 
> 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> 2019-02-27 21:59:18,438 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.d.exec.planner.sql.SqlConverter - User Error Occurred: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level ('LIMIT start, 
> count' is not allowed under the current SQL conformance level)
> org.apache.drill.common.exceptions.UserException: PARSE ERROR: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> SQL Query -- [autoLimit: 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> [Error Id: 286b7236-bafd-4ddc-ab10-aaac07e5c088 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:193) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:138)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:110)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:76)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:584) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:272) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.calcite.sql.parser.SqlParseException: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException(DrillParserImpl.java:357)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException(DrillParserImpl.java:145)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:156) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:181) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:185) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> ... 8 common frames omitted
> Caused by: org.apache.drill.exec.planner.sql.parser.impl.ParseException: 
> 'LIMIT start, count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.OrderedQueryOrExpr(DrillParserImpl.java:489)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> 

[jira] [Closed] (DRILL-5509) Upgrade Drill protobuf support from 2.5.0 to latest 3.3

2019-03-06 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua closed DRILL-5509.
---
   Resolution: Duplicate
 Assignee: Anton Gozhiy
Fix Version/s: 1.16.0

Protobuf upgraded to 3.6

> Upgrade Drill protobuf support from 2.5.0 to latest 3.3
> ---
>
> Key: DRILL-5509
> URL: https://issues.apache.org/jira/browse/DRILL-5509
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.10.0
>Reporter: Paul Rogers
>Assignee: Anton Gozhiy
>Priority: Minor
> Fix For: 1.16.0
>
>
> Drill uses Google Protobufs for RPC. Drill's Maven compile requires version 
> 2.5.0 from Feb. 2013. The latest version is 3.3. Over time, it may become 
> increasingly hard to find and build a four-year-old version.
> Upgrade Drill to use the latest Protobuf version. This will require updating 
> the Maven protobuf plugin, and may require other upgrades as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7051) Upgrade jetty

2019-03-05 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784821#comment-16784821
 ] 

Kunal Khatua commented on DRILL-7051:
-

Ok. Assigned it back to you, [~vitalii]. Thanks

> Upgrade jetty 
> --
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.16.0
>
>
> Is Drill using a version of jetty web server that's really old? The jars 
> suggest it's using jetty 9.1 that was built sometime in 2014? 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-7051) Upgrade jetty

2019-03-05 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-7051:
---

Assignee: Vitalii Diravka  (was: Kunal Khatua)

> Upgrade jetty 
> --
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Vitalii Diravka
>Priority: Major
> Fix For: 1.16.0
>
>
> Is Drill using a version of jetty web server that's really old? The jars 
> suggest it's using jetty 9.1 that was built sometime in 2014? 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7061) Selecting option to limit results to 1000 on web UI causes parse error

2019-02-27 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779902#comment-16779902
 ] 

Kunal Khatua commented on DRILL-7061:
-

Temporary workaround is to remove the comma ({{,}}) that appears in the field.

> Selecting option to limit results to 1000 on web UI causes parse error
> --
>
> Key: DRILL-7061
> URL: https://issues.apache.org/jira/browse/DRILL-7061
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Critical
> Fix For: 1.16.0
>
> Attachments: image-2019-02-27-14-17-24-348.png
>
>
> Selecting the option to limit results to 1,000 causes a parse error on the web 
> UI; a screenshot is attached. The browser used was Chrome.
> Drill version => 1.16.0-SNAPSHOT
> commit = e342ff5
> Error reported on web UI when we press Submit button on web UI
> {noformat}
> Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 'LIMIT 
> start, count' is not allowed under the current SQL conformance level SQL 
> Query -- [autoLimit: 1,000 rows] select * from ( select length(varStr) from 
> dfs.`/root/many_json_files` ) limit 1,000 [Error Id: 
> e252d1cc-54d4-4530-837c-a1726a5be89f on qa102-45.qa.lab:31010]{noformat}
>  Stack trace from drillbit.log
> {noformat}
> 2019-02-27 21:59:18,428 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2 issued by anonymous: -- [autoLimit: 
> 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> 2019-02-27 21:59:18,438 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.d.exec.planner.sql.SqlConverter - User Error Occurred: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level ('LIMIT start, 
> count' is not allowed under the current SQL conformance level)
> org.apache.drill.common.exceptions.UserException: PARSE ERROR: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> SQL Query -- [autoLimit: 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> [Error Id: 286b7236-bafd-4ddc-ab10-aaac07e5c088 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:193) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:138)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:110)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:76)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:584) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:272) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.calcite.sql.parser.SqlParseException: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException(DrillParserImpl.java:357)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException(DrillParserImpl.java:145)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:156) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:181) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:185) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> ... 8 common frames omitted
> Caused by: org.apache.drill.exec.planner.sql.parser.impl.ParseException: 
> 'LIMIT start, count' is not allowed under the current SQL conformance level
> at 
> 

[jira] [Commented] (DRILL-7037) Apache Drill Crashes when a 50mb json string is queried via the REST API provided

2019-02-27 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779911#comment-16779911
 ] 

Kunal Khatua commented on DRILL-7037:
-

[~er.ayushsha...@gmail.com] one possible workaround would be to dump your JSON 
data into a temporary table and then work off that, since you mentioned that it 
works when querying the JSON file directly.

In the event of a crash or at the end of the session, the data would be deleted 
automatically since it is temporary in nature.
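
A rough sketch of that workaround over JDBC (the connection URL, workspace and 
file name below are made-up placeholders; CTTAS needs a writable temporary 
workspace, {{dfs.tmp}} by default):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TempTableWorkaround {
  public static void main(String[] args) throws Exception {
    // Illustrative URL; adjust host/port, or use jdbc:drill:zk=... for a cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement()) {

      // 1) Land the big JSON as a file first (outside this sketch), then load it
      //    into a temporary table instead of embedding 50 MB of JSON in the query text.
      stmt.execute("CREATE TEMPORARY TABLE big_json AS "
          + "SELECT * FROM dfs.tmp.`big_payload.json`");

      // 2) Work off the temporary table; it is dropped automatically when the
      //    session ends (or the Drillbit crashes), as noted above.
      try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM big_json")) {
        while (rs.next()) {
          System.out.println("rows: " + rs.getLong(1));
        }
      }
    }
  }
}
{code}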

 

> Apache Drill Crashes when a 50mb json string is queried via the REST API 
> provided
> -
>
> Key: DRILL-7037
> URL: https://issues.apache.org/jira/browse/DRILL-7037
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.14.0
> Environment: Windows 10 
> 24GB RAM
> 8 Cores
> Used the REST API call to query drill
>Reporter: Ayush Sharma
>Priority: Blocker
>
> Apache Drill crashes with an OutOfMemoryException (24GB RAM) when a REST API 
> call is made by supplying a JSON string of size 50MB in the query parameter of 
> the REST API.
> The REST API even crashes for a 10MB query (16GB RAM) and works with a 5MB 
> query.
> This is a blocker for us and will need immediate remediation.
> We are also not aware of any sys.options which might bring the heap size down 
> drastically, or which are currently making it go up.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-7061) Selecting option to limit results to 1000 on web UI causes parse error

2019-02-27 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua resolved DRILL-7061.
-
   Resolution: Duplicate
Fix Version/s: 1.16.0

Duplicate of DRILL-6960 

> Selecting option to limit results to 1000 on web UI causes parse error
> --
>
> Key: DRILL-7061
> URL: https://issues.apache.org/jira/browse/DRILL-7061
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Critical
> Fix For: 1.16.0
>
> Attachments: image-2019-02-27-14-17-24-348.png
>
>
> Selecting the option to limit results to 1,000 causes a parse error on the web 
> UI; a screenshot is attached. The browser used was Chrome.
> Drill version => 1.16.0-SNAPSHOT
> commit = e342ff5
> Error reported on web UI when we press Submit button on web UI
> {noformat}
> Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 'LIMIT 
> start, count' is not allowed under the current SQL conformance level SQL 
> Query -- [autoLimit: 1,000 rows] select * from ( select length(varStr) from 
> dfs.`/root/many_json_files` ) limit 1,000 [Error Id: 
> e252d1cc-54d4-4530-837c-a1726a5be89f on qa102-45.qa.lab:31010]{noformat}
>  Stack trace from drillbit.log
> {noformat}
> 2019-02-27 21:59:18,428 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2 issued by anonymous: -- [autoLimit: 
> 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> 2019-02-27 21:59:18,438 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.d.exec.planner.sql.SqlConverter - User Error Occurred: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level ('LIMIT start, 
> count' is not allowed under the current SQL conformance level)
> org.apache.drill.common.exceptions.UserException: PARSE ERROR: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> SQL Query -- [autoLimit: 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> [Error Id: 286b7236-bafd-4ddc-ab10-aaac07e5c088 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:193) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:138)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:110)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:76)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:584) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:272) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.calcite.sql.parser.SqlParseException: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException(DrillParserImpl.java:357)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException(DrillParserImpl.java:145)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:156) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:181) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:185) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> ... 8 common frames omitted
> Caused by: org.apache.drill.exec.planner.sql.parser.impl.ParseException: 
> 'LIMIT start, count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.OrderedQueryOrExpr(DrillParserImpl.java:489)
>  

[jira] [Comment Edited] (DRILL-7061) Selecting option to limit results to 1000 on web UI causes parse error

2019-02-27 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779896#comment-16779896
 ] 

Kunal Khatua edited comment on DRILL-7061 at 2/27/19 11:31 PM:
---

Duplicate of DRILL-6960 

Fix is ready and under review


was (Author: kkhatua):
Duplicate of DRILL-6960 

> Selecting option to limit results to 1000 on web UI causes parse error
> --
>
> Key: DRILL-7061
> URL: https://issues.apache.org/jira/browse/DRILL-7061
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Critical
> Fix For: 1.16.0
>
> Attachments: image-2019-02-27-14-17-24-348.png
>
>
> Selecting the option to limit results to 1,000 causes a parse error on the web 
> UI; a screenshot is attached. The browser used was Chrome.
> Drill version => 1.16.0-SNAPSHOT
> commit = e342ff5
> Error reported on web UI when we press Submit button on web UI
> {noformat}
> Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 'LIMIT 
> start, count' is not allowed under the current SQL conformance level SQL 
> Query -- [autoLimit: 1,000 rows] select * from ( select length(varStr) from 
> dfs.`/root/many_json_files` ) limit 1,000 [Error Id: 
> e252d1cc-54d4-4530-837c-a1726a5be89f on qa102-45.qa.lab:31010]{noformat}
>  Stack trace from drillbit.log
> {noformat}
> 2019-02-27 21:59:18,428 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.drill.exec.work.foreman.Foreman - Query text for query with id 
> 2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2 issued by anonymous: -- [autoLimit: 
> 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> 2019-02-27 21:59:18,438 [2388f7c9-2cb4-0ef8-4088-3ffcab1f0ed2:foreman] INFO 
> o.a.d.exec.planner.sql.SqlConverter - User Error Occurred: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level ('LIMIT start, 
> count' is not allowed under the current SQL conformance level)
> org.apache.drill.common.exceptions.UserException: PARSE ERROR: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> SQL Query -- [autoLimit: 1,000 rows]
> select * from (
> select length(varStr) from dfs.`/root/many_json_files`
> ) limit 1,000
> [Error Id: 286b7236-bafd-4ddc-ab10-aaac07e5c088 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
>  ~[drill-common-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:193) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:138)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:110)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:76)
>  [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:584) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:272) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
> Caused by: org.apache.calcite.sql.parser.SqlParseException: 'LIMIT start, 
> count' is not allowed under the current SQL conformance level
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.convertException(DrillParserImpl.java:357)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at 
> org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl.normalizeException(DrillParserImpl.java:145)
>  ~[drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:156) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:181) 
> ~[calcite-core-1.18.0-drill-r0.jar:1.18.0-drill-r0]
> at 
> org.apache.drill.exec.planner.sql.SqlConverter.parse(SqlConverter.java:185) 
> [drill-java-exec-1.16.0-SNAPSHOT.jar:1.16.0-SNAPSHOT]
> ... 8 common frames omitted
> Caused by: org.apache.drill.exec.planner.sql.parser.impl.ParseException: 
> 'LIMIT start, count' is not allowed under the current SQL conformance level
> at 
> 

[jira] [Updated] (DRILL-7051) Upgrade jetty

2019-02-27 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7051:

Fix Version/s: 1.16.0

> Upgrade jetty 
> --
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> Is Drill using a version of jetty web server that's really old? The jars 
> suggest it's using jetty 9.1 that was built sometime in 2014? 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7051) Upgrade jetty

2019-02-27 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779871#comment-16779871
 ] 

Kunal Khatua commented on DRILL-7051:
-

[~vitalii]

I'm testing an upgraded version of Jetty ( 
[https://mvnrepository.com/artifact/org.eclipse.jetty/jetty-server/9.4.15.v20190215]
 ): 

c505b2370030f9e75b4092764e143153a6810a51

I'm not sure if {{org.mortbay.jetty}} needs an upgrade. I'll verify that there 
are no regressions in the meantime. 

> Upgrade jetty 
> --
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Vitalii Diravka
>Priority: Major
>
> Is Drill using a version of jetty web server that's really old? The jars 
> suggest it's using jetty 9.1 that was built sometime in 2014? 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-7051) Upgrade jetty

2019-02-27 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-7051:
---

Assignee: Kunal Khatua  (was: Vitalii Diravka)

> Upgrade jetty 
> --
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Kunal Khatua
>Priority: Major
>
> Is Drill using a version of jetty web server that's really old? The jars 
> suggest it's using jetty 9.1 that was built sometime in 2014? 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-6540) Upgrade to HADOOP-3.1 libraries

2019-02-27 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua reassigned DRILL-6540:
---

Assignee: Anton Gozhiy

> Upgrade to HADOOP-3.1 libraries 
> 
>
> Key: DRILL-6540
> URL: https://issues.apache.org/jira/browse/DRILL-6540
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Anton Gozhiy
>Priority: Major
> Fix For: Future
>
>
> Currently Drill uses version 2.7.4 of the Hadoop libraries (hadoop-common, 
> hadoop-hdfs, hadoop-annotations, hadoop-aws, hadoop-yarn-api, hadoop-client, 
> hadoop-yarn-client).
>  Half a year ago [Hadoop 3.0|https://hadoop.apache.org/docs/r3.0.0/index.html] 
> was released, and recently there was an update - 
> [Hadoop 3.2.0|https://hadoop.apache.org/docs/r3.2.0/].
> To use Drill under a Hadoop 3.0 distribution we need this upgrade. The newer 
> version also includes new features which can be useful for Drill.
>  This upgrade is also needed to leverage the newest versions of the Zookeeper 
> libraries and Hive 3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (DRILL-6937) sys.functions table needs a fix in the names column

2019-02-26 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua closed DRILL-6937.
---
Resolution: Not A Bug

> sys.functions table needs a fix in the names column
> ---
>
> Key: DRILL-6937
> URL: https://issues.apache.org/jira/browse/DRILL-6937
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.15.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.16.0
>
>
> In some cases, the function names in the name column of sys.functions are the 
> operators. This is not the expected behavior; the name column should have 
> actual names and not the operators.
> I am on Drill 1.15.0 commit : 8743e8f1e8d5bca4d67c94d07a8560ad356ff2b6
> {noformat}
> Apache Drill 1.15.0
> "Data is the new oil. Ready to Drill some?"
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from sys.functions;
> +---------+
> | EXPR$0  |
> +---------+
> | 2846    |
> +---------+
> 1 row selected (0.327 seconds)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct name from sys.functions limit 
> 12;
> +--------+
> | name   |
> +--------+
> | !=     |
> | $sum0  |
> | &&     |
> | -      |
> | /int   |
> | <      |
> | <=     |
> | <>     |
> | =      |
> | ==     |
> | >      |
> | >=     |
> +--------+
> 12 rows selected (0.175 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (DRILL-5696) Change default compiler strategy

2019-02-26 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-5696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua closed DRILL-5696.
---
Resolution: Cannot Reproduce

> Change default compiler strategy
> 
>
> Key: DRILL-5696
> URL: https://issues.apache.org/jira/browse/DRILL-5696
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Codegen
>Affects Versions: 1.9.0, 1.10.0, 1.11.0
>Reporter: weijie.tong
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: Future, 1.16.0
>
>
> In our production environment, when we have more than 20 aggregate expressions, 
> compile time is high with the default Janino compiler, but after switching to 
> the JDK compiler the compile time is lower than with Janino. Our production JDK 
> version is 1.8. So the default should be JDK if the user's JDK version is higher 
> than 1.7. We should add another check condition to the ClassCompilerSelector.
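
For reference, a minimal sketch of switching the code-generation compiler per 
session while this is being discussed, assuming the {{exec.java_compiler}} 
option exposed in sys.options (verify the exact name and values on your build):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SwitchCodegenCompiler {
  public static void main(String[] args) throws Exception {
    // Illustrative connection URL; adjust for your deployment.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement()) {
      // Ask Drill to use the JDK compiler for generated code in this session
      // instead of the default Janino-based selection.
      stmt.execute("ALTER SESSION SET `exec.java_compiler` = 'JDK'");

      // ... run the query with the many aggregate expressions here ...
    }
  }
}
{code}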



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows() with System Option

2019-02-22 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Summary: Implement JDBC Statement.setMaxRows() with System Option  (was: 
Implement JDBC Statement.setMaxRows() )

> Implement JDBC Statement.setMaxRows() with System Option
> 
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server could have a pre-defined value as 
> well (default 0, i.e. no limit) so that an _admin_ would be able to enforce a 
> max limit on the resultset size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6800) Simplify packaging of Jdbc-all jar

2019-02-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6800:

Fix Version/s: 2.0.0
   Future

> Simplify packaging of Jdbc-all jar
> --
>
> Key: DRILL-6800
> URL: https://issues.apache.org/jira/browse/DRILL-6800
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Priority: Major
> Fix For: Future, 2.0.0
>
>
> Today the jdbc-all package is created using drill-java-exec as a dependency and 
> then excluding unnecessary dependencies. There is also a size check for the 
> jdbc-all jar to avoid including any unwanted dependency, but the configured size 
> has increased over time and doesn't really provide a good mechanism to enforce a 
> small footprint for the jdbc-all jar. Following are some recommendations to 
> improve it:
>  1) Divide the java-exec module into separate client/server and common modules.
>  2) Have the size check for the client artifact only.
>  3) Update the jdbc-all pom to include the newly created client artifact and the 
> jdbc driver artifact. 
>  * Have multiple profiles to include and exclude any profile-specific 
> dependency. For example, the MapR profile will exclude the Hadoop dependency 
>  whereas the Apache profile will include it.
>  * We can create 2 artifacts for jdbc-all: one with Hadoop dependencies and the 
> other without them (for a smaller jar size).
> 4) Update client side protobuf to not have server side definitions like 
> QueryProfile / CoreOperatorType etc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (DRILL-3931) Upgrade fileclient dependency in mapr profile

2019-02-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-3931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua closed DRILL-3931.
---
Resolution: Auto Closed

> Upgrade fileclient dependency in mapr profile 
> --
>
> Key: DRILL-3931
> URL: https://issues.apache.org/jira/browse/DRILL-3931
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
>Priority: Major
> Fix For: Future
>
>
> Current dependency version is 4.1.0-mapr. There is a critical fix that went 
> into 4.1.0.34989-mapr. Upgrade the dependency version to 4.1.0.34989-mapr. 
> Only pom file changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7036) Improve UI for alert and error messages

2019-02-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7036:

Labels: ready-to-commit user-experience  (was: user-experience)

> Improve UI for alert and error messages
> ---
>
> Key: DRILL-7036
> URL: https://issues.apache.org/jira/browse/DRILL-7036
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit, user-experience
> Fix For: 1.16.0
>
>
> Currently, the WebUI has a rather inconsistent user experience when it comes 
> to dealing with errors and exceptions.
> This Jira proposes standardizing that to a cleaner interface by leveraging 
> Bootstrap's modals and panels for publishing the messages in a presentable 
> format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7048) Implement JDBC Statement.setMaxRows()

2019-02-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7048:

Issue Type: New Feature  (was: Sub-task)
Parent: (was: DRILL-6960)

> Implement JDBC Statement.setMaxRows() 
> --
>
> Key: DRILL-7048
> URL: https://issues.apache.org/jira/browse/DRILL-7048
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Client - JDBC, Query Planning & Optimization
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> With DRILL-6960, the webUI will get an auto-limit on the number of results 
> fetched.
> Since more of the plumbing is already there, it makes sense to provide the 
> same for the JDBC client.
> In addition, it would be nice if the Server could have a pre-defined value as 
> well (default 0, i.e. no limit) so that an _admin_ would be able to enforce a 
> max limit on the resultset size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7048) Implement JDBC Statement.setMaxRows()

2019-02-21 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7048:
---

 Summary: Implement JDBC Statement.setMaxRows() 
 Key: DRILL-7048
 URL: https://issues.apache.org/jira/browse/DRILL-7048
 Project: Apache Drill
  Issue Type: Sub-task
  Components: Client - JDBC, Query Planning & Optimization
Affects Versions: 1.15.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.16.0


With DRILL-6960, the webUI will get an auto-limit on the number of results 
fetched.

Since more of the plumbing is already there, it makes sense to provide the same 
for the JDBC client.

In addition, it would be nice if the Server could have a pre-defined value as 
well (default 0, i.e. no limit) so that an _admin_ would be able to enforce a 
max limit on the resultset size.
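
A minimal sketch of what this looks like to a JDBC client once implemented; it 
is just the standard {{java.sql.Statement}} API, and the connection URL below is 
illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxRowsExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement()) {
      // Client-side cap on the result set; with this feature the limit can also
      // be pushed into the query plan instead of rows being discarded afterwards.
      stmt.setMaxRows(1000);
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM cp.`employee.json`")) {
        int rows = 0;
        while (rs.next()) {
          rows++;
        }
        System.out.println("fetched " + rows + " rows"); // at most 1000
      }
    }
  }
}
{code}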



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7036) Improve UI for alert and error messages

2019-02-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-7036:

Reviewer: Sorabh Hamirwasia

> Improve UI for alert and error messages
> ---
>
> Key: DRILL-7036
> URL: https://issues.apache.org/jira/browse/DRILL-7036
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: user-experience
> Fix For: 1.16.0
>
>
> Currently, the WebUI has a rather inconsistent user experience when it comes 
> to dealing with errors and exceptions.
> This Jira proposes standardizing that to a cleaner interface by leveraging 
> Bootstrap's modals and panels for publishing the messages in a presentable 
> format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-2362) Drill should manage Query Profiling archiving

2019-02-15 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-2362:

Fix Version/s: 1.16.0

> Drill should manage Query Profiling archiving
> -
>
> Key: DRILL-2362
> URL: https://issues.apache.org/jira/browse/DRILL-2362
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Affects Versions: 0.7.0
>Reporter: Chris Westin
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> We collect query profile information for analysis purposes, but we keep it 
> forever. At this time, for a few queries, it isn't a problem. But as users 
> start putting Drill into production, automated use via other applications 
> will make this grow quickly. We need to come up with a retention policy 
> mechanism, with suitable settings administrators can use, and implement it so 
> that this data can be cleaned up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7036) Improve UI for alert and error messages

2019-02-11 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-7036:
---

 Summary: Improve UI for alert and error messages
 Key: DRILL-7036
 URL: https://issues.apache.org/jira/browse/DRILL-7036
 Project: Apache Drill
  Issue Type: Improvement
  Components: Web Server
Affects Versions: 1.15.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.16.0


Currently, the WebUI has a rather inconsistent user experience when it comes to 
dealing with errors and exceptions.

This Jira proposes standardizing that to a cleaner interface by leveraging 
Bootstrap's modals and panels for publishing the messages in a presentable format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6960) Auto Limit Wrapping should not apply to non-select query

2019-02-06 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6960:

Labels: user-experience  (was: doc-impacting user-experience)

> Auto Limit Wrapping should not apply to non-select query
> 
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: user-experience
> Fix For: 1.16.0
>
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when a user enters {{SHOW DATABASES}}, after the limit wrapping 
> is applied {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} will be posted. 
> This results in parsing errors, like:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}.
> The fix should involve a javascript check for all non-select queries and not 
> apply the LIMIT wrap for those queries.
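
A rough sketch of the kind of check being described; the actual fix belongs in 
the Web UI's JavaScript, so the Java below (with made-up class and method names) 
is only an illustration of the idea:

{code:java}
import java.util.regex.Pattern;

public class AutoLimitCheck {

  // Only plain SELECT statements get the auto-limit wrapping; SHOW, DESCRIBE,
  // SET, CTAS and other non-select statements are submitted untouched.
  private static final Pattern PLAIN_SELECT =
      Pattern.compile("^\\s*select\\b.*", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

  static boolean shouldWrapWithLimit(String query) {
    return PLAIN_SELECT.matcher(query).matches();
  }

  static String maybeWrap(String query, int limit) {
    return shouldWrapWithLimit(query)
        ? "-- [autoLimit: " + limit + " rows]\nselect * from (\n" + query + "\n) limit " + limit
        : query;
  }

  public static void main(String[] args) {
    System.out.println(maybeWrap("SHOW DATABASES", 10));                   // left as-is
    System.out.println(maybeWrap("select * from cp.`employee.json`", 10)); // wrapped
  }
}
{code}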



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6960) Auto Limit Wrapping should not apply to non-select query

2019-02-04 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6960:

Labels: doc-impacting user-experience  (was: user-experience)

> Auto Limit Wrapping should not apply to non-select query
> 
>
> Key: DRILL-6960
> URL: https://issues.apache.org/jira/browse/DRILL-6960
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: doc-impacting, user-experience
> Fix For: 1.16.0
>
>
> [~IhorHuzenko] pointed out that DRILL-6050 can cause submission of queries 
> with incorrect syntax. 
> For example, when a user enters {{SHOW DATABASES}}, after the limit wrapping 
> is applied {{SELECT * FROM (SHOW DATABASES) LIMIT 10}} will be posted. 
> This results in parsing errors, like:
> {{Query Failed: An Error Occurred 
> org.apache.drill.common.exceptions.UserRemoteException: PARSE ERROR: 
> Encountered "( show" at line 2, column 15. Was expecting one of:  
> ... }}.
> The fix should involve a javascript check for all non-select queries and not 
> apply the LIMIT wrap for those queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6983) PAM Auth Enabled on Drill-On-YARN only works on YARN user

2019-01-23 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750586#comment-16750586
 ] 

Kunal Khatua commented on DRILL-6983:
-

[~mikehomee] this *might* be a config/setup issue. Can you try asking in the 
User mailing list? 

[https://drill.apache.org/mailinglists/]

Also, take a look in the archives as this might have been addressed previously.

> PAM Auth Enabled on Drill-On-YARN only works on YARN user
> -
>
> Key: DRILL-6983
> URL: https://issues.apache.org/jira/browse/DRILL-6983
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Michael Dennis Uanang
>Priority: Major
> Attachments: Selection_999(203).png, Selection_999(204).png, 
> Selection_999(205).png
>
>
> Hi,
> I'm having a problem running Drill-on-YARN with PAM authentication enabled. PAM 
> auth is working, BUT it only accepts logins via the Web UI for the YARN user.
> _drill-override.conf_
>  
> {code:java}
> drill.exec: {
>  cluster-id: "drillbits2",
>  zk.connect: "app40:2181,app41:2181,app42:2181",
>  impersonation: {
>   enabled: true
>  },
> security: {
>   auth.mechanisms: [ "PLAIN" ],
>   user.auth.enabled: true,
>   user.auth.packages += "org.apache.drill.exec.rpc.user.security",
>   user.auth.impl: "pam",  
>   user.auth.pam_profiles: [ "login", "sshd" ]
>   }
> }
> {code}
>  
>  
> SEE errors below:
> !Selection_999(204).png!
>  
> !Selection_999(203).png!
> As you can see from the screenshots, when trying to log in via the Web UI using 
> the infra or drill user, I get the error 'password check failed for user 
> (USER)'. But you'll also notice that it reports an authentication failure 
> for UID=1018, which is YARN. 
> !Selection_999(205).png!
>  
> Please help me to right direction or if I'm missing something.
> Thank you.
>  
> MD
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6994) TIMESTAMP type DOB column in Spark parquet is treated as VARBINARY in Drill

2019-01-22 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16749539#comment-16749539
 ] 

Kunal Khatua commented on DRILL-6994:
-

[~khfaraaz] what does the schema look like according to the {{parquet-tools}} 
utility? 
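
If {{parquet-tools}} shows DOB stored as the INT96 physical type (Spark's usual 
encoding for timestamps), one workaround that is often suggested - an assumption 
here, not something verified in this ticket - is to enable Drill's 
INT96-to-timestamp conversion for the session, or to convert the binary column 
explicitly; a sketch over JDBC:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadSparkTimestamps {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement()) {
      // Treat INT96 parquet values as timestamps for this session.
      stmt.execute("ALTER SESSION SET `store.parquet.reader.int96_as_timestamp` = true");

      // Alternatively, convert the raw binary explicitly:
      //   SELECT CONVERT_FROM(DOB, 'TIMESTAMP_IMPALA') AS DOB FROM ...
      try (ResultSet rs = stmt.executeQuery(
          "SELECT Name, DOB FROM dfs.`/apps/infer_schema_example.parquet`")) {
        while (rs.next()) {
          System.out.println(rs.getString("Name") + " -> " + rs.getTimestamp("DOB"));
        }
      }
    }
  }
}
{code}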

> TIMESTAMP type DOB column in Spark parquet is treated as VARBINARY in Drill
> ---
>
> Key: DRILL-6994
> URL: https://issues.apache.org/jira/browse/DRILL-6994
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.14.0
>Reporter: Khurram Faraaz
>Priority: Major
>
> A timestamp type column in a parquet file created from Spark is treated as 
> VARBINARY by Drill 1.14.0. Trying to cast the DOB column to DATE results in an 
> Exception, although the monthOfYear field is in the allowed range.
> Data used in the test
> {noformat}
> [test@md123 spark_data]# cat inferSchema_example.csv
> Name,Department,years_of_experience,DOB
> Sam,Software,5,1990-10-10
> Alex,Data Analytics,3,1992-10-10
> {noformat}
> Create the parquet file using the above CSV file
> {noformat}
> [test@md123 bin]# ./spark-shell
> 19/01/22 21:21:34 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> Spark context Web UI available at http://md123.qa.lab:4040
> Spark context available as 'sc' (master = local[*], app id = 
> local-1548192099796).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.1-mapr-SNAPSHOT
>       /_/
> Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_191)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> import org.apache.spark.sql.\{DataFrame, SQLContext}
> import org.apache.spark.sql.\{DataFrame, SQLContext}
> scala> import org.apache.spark.\{SparkConf, SparkContext}
> import org.apache.spark.\{SparkConf, SparkContext}
> scala> val sqlContext: SQLContext = new SQLContext(sc)
> warning: there was one deprecation warning; re-run with -deprecation for 
> details
> sqlContext: org.apache.spark.sql.SQLContext = 
> org.apache.spark.sql.SQLContext@2e0163cb
> scala> val df = 
> sqlContext.read.format("com.databricks.spark.csv").option("header", 
> "true").option("inferSchema", "true").load("/apps/inferSchema_example.csv")
> df: org.apache.spark.sql.DataFrame = [Name: string, Department: string ... 2 
> more fields]
> scala> df.printSchema
> root
>  |-- Name: string (nullable = true)
>  |-- Department: string (nullable = true)
>  |-- years_of_experience: integer (nullable = true)
>  |-- DOB: timestamp (nullable = true)
> scala> df.write.parquet("/apps/infer_schema_example.parquet")
> // Read the parquet file
> scala> val data = 
> sqlContext.read.parquet("/apps/infer_schema_example.parquet")
> data: org.apache.spark.sql.DataFrame = [Name: string, Department: string ... 
> 2 more fields]
> // Print the schema of the parquet file from Spark
> scala> data.printSchema
> root
>  |-- Name: string (nullable = true)
>  |-- Department: string (nullable = true)
>  |-- years_of_experience: integer (nullable = true)
>  |-- DOB: timestamp (nullable = true)
> // Display the contents of parquet file on spark-shell
> // register temp table and do a show on all records,to display.
> scala> data.registerTempTable("employee")
> warning: there was one deprecation warning; re-run with -deprecation for 
> details
> scala> val allrecords = sqlContext.sql("SELeCT * FROM employee")
> allrecords: org.apache.spark.sql.DataFrame = [Name: string, Department: 
> string ... 2 more fields]
> scala> allrecords.show()
> +----+--------------+-------------------+-------------------+
> |Name|    Department|years_of_experience|                DOB|
> +----+--------------+-------------------+-------------------+
> | Sam|      Software|                  5|1990-10-10 00:00:00|
> |Alex|Data Analytics|                  3|1992-10-10 00:00:00|
> +----+--------------+-------------------+-------------------+
> {noformat}
> Querying the parquet file from Drill 1.14.0-mapr, results in the DOB column 
> (timestamp type in Spark) being treated as VARBINARY.
> {noformat}
> apache drill 1.14.0-mapr
> "a little sql for your nosql"
> 0: jdbc:drill:schema=dfs.tmp> select * from 
> dfs.`/apps/infer_schema_example.parquet`;
> +-------+-----------------+----------------------+--------------+
> | Name  | Department      | years_of_experience  | DOB          |
> +-------+-----------------+----------------------+--------------+
> | Sam   | Software        | 5                    | [B@2bef51f2  |
> | Alex  | Data Analytics  | 3                    | [B@650eab8   |
> +-------+-----------------+----------------------+--------------+
> 2 rows selected (0.229 seconds)
> // typeof(DOB) column returns a VARBINARY type, whereas the parquet schema in 
> Spark for DOB: 

[jira] [Commented] (DRILL-5807) ambiguous error

2019-01-22 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16749538#comment-16749538
 ] 

Kunal Khatua commented on DRILL-5807:
-

I'm wondering if moving the aliases inside the parentheses might resolve the 
issue.
e.g. 
{code}... FROM "dws_tb_crm_u2_itm_base_df" d0  ...{code}



> ambiguous error
> ---
>
> Key: DRILL-5807
> URL: https://issues.apache.org/jira/browse/DRILL-5807
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Affects Versions: 1.11.0
> Environment: Linux
>Reporter: XiaHang
>Priority: Critical
>
> If the final plan looks like the one below, a JdbcFilter sits below one 
> JdbcJoin and above another JdbcJoin. 
> JdbcProject(order_id=[$0], mord_id=[$6], item_id=[$2], div_pay_amt=[$5], 
> item_quantity=[$4], slr_id=[$11]): rowcount = 5625.0, cumulative cost = 
> {12540.0 rows, 29763.0 cpu, 0.0 io}, id = 327
> JdbcJoin(condition=[=($3, $11)], joinType=[left]): rowcount = 5625.0, 
> cumulative cost = {8040.0 rows, 2763.0 cpu, 0.0 io}, id = 325
>   JdbcFilter(condition=[OR(AND(OR(IS NOT NULL($7), >($5, 0)), =($1, 2), 
> OR(AND(=($10, '箱包皮具/热销女包/男包'), >(/($5, $4), 1000)), AND(OR(=($10, '家装主材'), 
> =($10, '大家电')), >(/($5, $4), 1000)), AND(OR(=($10, '珠宝/钻石/翡翠/黄金'), =($10, 
> '饰品/流行首饰/时尚饰品新')), >(/($5, $4), 2000)), AND(>(/($5, $4), 500), <>($10, 
> '箱包皮具/热销女包/男包'), <>($10, '家装主材'), <>($10, '大家电'), <>($10, '珠宝/钻石/翡翠/黄金'), 
> <>($10, '饰品/流行首饰/时尚饰品新'))), <>($10, '成人用品/情趣用品'), <>($10, '鲜花速递/花卉仿真/绿植园艺'), 
> <>($10, '水产肉类/新鲜蔬果/熟食')), AND(<=(-(EXTRACT(FLAG(EPOCH), CURRENT_TIMESTAMP), 
> EXTRACT(FLAG(EPOCH), CAST($8):TIMESTAMP(0))), *(*(*(14, 24), 60), 60)), 
> OR(AND(OR(=($10, '箱包皮具/热销女包/男包'), =($10, '家装主材'), =($10, '大家电'), =($10, 
> '珠宝/钻石/翡翠/黄金'), =($10, '饰品/流行首饰/时尚饰品新')), >(/($5, $4), 2000)), AND(OR(=($10, 
> '男装'), =($10, '女装/女士精品'), =($10, '办公设备/耗材/相关服务')), >(/($5, $4), 1000)), 
> AND(OR(=($10, '流行男鞋'), =($10, '女鞋')), >(/($5, $4), 1500))), IS NOT NULL($8)), 
> AND(>=(-(EXTRACT(FLAG(EPOCH), CURRENT_TIMESTAMP), EXTRACT(FLAG(EPOCH), 
> CAST($8):TIMESTAMP(0))), *(*(*(15, 24), 60), 60)), <=(-(EXTRACT(FLAG(EPOCH), 
> CURRENT_TIMESTAMP), EXTRACT(FLAG(EPOCH), CAST($8):TIMESTAMP(0))), *(*(*(60, 
> 24), 60), 60)), OR(AND(OR(=($10, '箱包皮具/热销女包/男包'), =($10, '珠宝/钻石/翡翠/黄金'), 
> =($10, '饰品/流行首饰/时尚饰品新')), >(/($5, $4), 5000)), AND(OR(=($10, '男装'), =($10, 
> '女装/女士精品')), >(/($5, $4), 3000)), AND(OR(=($10, '流行男鞋'), =($10, '女鞋')), 
> >(/($5, $4), 2500)), AND(=($10, '办公设备/耗材/相关服务'), >(/($5, $4), 2000))), IS NOT 
> NULL($8)))]): rowcount = 375.0, cumulative cost = {2235.0 rows, 2582.0 cpu, 
> 0.0 io}, id = 320
> JdbcJoin(condition=[=($2, $9)], joinType=[left]): rowcount = 1500.0, 
> cumulative cost = {1860.0 rows, 1082.0 cpu, 0.0 io}, id = 318
>   JdbcProject(order_id=[$0], pay_status=[$2], item_id=[$3], 
> seller_id=[$5], item_quantity=[$7], div_pay_amt=[$20], mord_id=[$1], 
> pay_time=[$19], succ_time=[$52]): rowcount = 100.0, cumulative cost = {180.0 
> rows, 821.0 cpu, 0.0 io}, id = 313
> JdbcTableScan(table=[[public, dws_tb_crm_u2_ord_base_df]]): 
> rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 29
>   JdbcProject(item_id=[$0], cate_level1_name=[$47]): rowcount = 
> 100.0, cumulative cost = {180.0 rows, 261.0 cpu, 0.0 io}, id = 316
> JdbcTableScan(table=[[public, dws_tb_crm_u2_itm_base_df]]): 
> rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 46
>   JdbcProject(slr_id=[$3]): rowcount = 100.0, cumulative cost = {180.0 
> rows, 181.0 cpu, 0.0 io}, id = 323
> JdbcTableScan(table=[[public, dws_tb_crm_u2_slr_base]]): rowcount = 
> 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 68
> The SQL is converted to:
> SELECT "t1"."order_id", "t1"."mord_id", "t1"."item_id", "t1"."div_pay_amt", 
> "t1"."item_quantity", "t2"."slr_id"
> FROM (SELECT *
> FROM (SELECT "order_id", "pay_status", "item_id", "seller_id", 
> "item_quantity", "div_pay_amt", "mord_id", "pay_time", "succ_time"
> FROM "dws_tb_crm_u2_ord_base_df") AS "t"
> LEFT JOIN (SELECT "item_id", "cate_level1_name"
> FROM "dws_tb_crm_u2_itm_base_df") AS "t0" ON "t"."item_id" = "t0"."item_id"
> WHERE ("t"."pay_time" IS NOT NULL OR "t"."div_pay_amt" > 0) AND 
> "t"."pay_status" = 2 AND ("t0"."cate_level1_name" = '箱包皮具/热销女包/男包' AND 
> "t"."div_pay_amt" / "t"."item_quantity" > 1000 OR ("t0"."cate_level1_name" = 
> '家装主材' OR "t0"."cate_level1_name" = '大家电') AND "t"."div_pay_amt" / 
> "t"."item_quantity" > 1000 OR ("t0"."cate_level1_name" = '珠宝/钻石/翡翠/黄金' OR 
> "t0"."cate_level1_name" = '饰品/流行首饰/时尚饰品新') AND "t"."div_pay_amt" / 
> "t"."item_quantity" > 2000 OR "t"."div_pay_amt" / "t"."item_quantity" > 500 
> AND "t0"."cate_level1_name" <> '箱包皮具/热销女包/男包' AND "t0"."cate_level1_name" <> 
> '家装主材' AND "t0"."cate_level1_name" <> '大家电' AND 

[jira] [Commented] (DRILL-6937) sys.functions table needs a fix in the names column

2019-01-21 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748442#comment-16748442
 ] 

Kunal Khatua commented on DRILL-6937:
-

I think everything is implemented as a function. So in this case, the function 
is 
{{bit Z = greater_than(bigint X, bigint Y)}}
However, the SQL language parser is (probably) providing syntactic sugar that 
makes it easier to write
{{Z = X > Y}}

Both options are available to the user; we just tend to use whichever is more 
intuitive.
{code}
0: jdbc:drill:drillbit=kk127> select 2 > 1 from (values(1));
+---------+
| EXPR$0  |
+---------+
| true    |
+---------+
1 row selected (0.862 seconds)
0: jdbc:drill:drillbit=kk127> select greater_than(2,1) from (values(1));
+---------+
| EXPR$0  |
+---------+
| true    |
+---------+
1 row selected (0.453 seconds)
{code}
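
Either spelling can also be looked up directly in {{sys.functions}}; a quick 
sketch (column names as shown in the sys.functions output elsewhere on this 
issue):
{code}
SELECT name, signature, returnType
FROM sys.functions
WHERE name IN ('>', 'greater_than')
  AND signature = 'BIGINT-REQUIRED,BIGINT-REQUIRED';
{code}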

> sys.functions table needs a fix in the names column
> ---
>
> Key: DRILL-6937
> URL: https://issues.apache.org/jira/browse/DRILL-6937
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.15.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.16.0
>
>
> The function names in the name column of sys.functions are, in some cases, the 
> operators. This is not the expected behavior; the name column should have 
> actual names rather than the operators.
> I am on Drill 1.15.0 commit : 8743e8f1e8d5bca4d67c94d07a8560ad356ff2b6
> {noformat}
> Apache Drill 1.15.0
> "Data is the new oil. Ready to Drill some?"
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from sys.functions;
> +---------+
> | EXPR$0  |
> +---------+
> | 2846    |
> +---------+
> 1 row selected (0.327 seconds)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct name from sys.functions limit 
> 12;
> +--------+
> | name   |
> +--------+
> | !=     |
> | $sum0  |
> | &&     |
> | -      |
> | /int   |
> | <      |
> | <=     |
> | <>     |
> | =      |
> | ==     |
> | >      |
> | >=     |
> +--------+
> 12 rows selected (0.175 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (DRILL-6937) sys.functions table needs a fix in the names column

2019-01-21 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748280#comment-16748280
 ] 

Kunal Khatua edited comment on DRILL-6937 at 1/22/19 12:02 AM:
---

[~khfaraaz] the {{/int}} function name is correct. See 
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/DateIntervalFunctionTemplates/IntervalNumericArithmetic.java#L133

That said, I'm not sure if this is really a bug, since there have been other 
functions, like {{$sum0}} around as well:
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/SumZeroAggr.java#L48

Not all functions in the {{sys.functions}} table are necessarily used by their 
alphabetical names; some are exposed to users as mathematical symbols 
(operators), which Drill implicitly transforms into the appropriate function.

For example, the _equals_, _less than_ and _greater than_ operators can be 
seen here in multiple forms.
{code}
0: jdbc:drill:drillbit=kk127> select * from sys.functions where returnType = 
'BIT' and signature = 'BIGINT-REQUIRED,BIGINT-REQUIRED'  limit 15;
+---------------------------+---------------------------------+------------+----------+----------+
| name                      | signature                       | returnType | source   | internal |
+---------------------------+---------------------------------+------------+----------+----------+
| !=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <>                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| =                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| ==                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| >                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| >=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| equal                     | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| greater_than              | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| greater_than_or_equal_to  | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| less_than                 | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| less_than_or_equal_to     | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| not_equal                 | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
+---------------------------+---------------------------------+------------+----------+----------+
14 rows selected (0.282 seconds)
{code}

So, I'm not sure this would qualify as a bug. 
[~arina] do you agree?


was (Author: kkhatua):
[~khfaraaz] the {{/int}} function name is correct. See 
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/DateIntervalFunctionTemplates/IntervalNumericArithmetic.java#L133

That said, I'm not sure if this is really a bug, since there have been other 
functions, like {{$sum0}} around as well:
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/SumZeroAggr.java#L48

Not all functions in the {{sys.functions}} table are necessarily used by their 
alphabetical names; some are exposed to users as mathematical symbols (operators).

For example, the _less than_ or _greater than_ operator can be seen here in 
multiple forms.
{code}
0: jdbc:drill:drillbit=kk127> select * from sys.functions where returnType = 
'BIT' and signature = 'BIGINT-REQUIRED,BIGINT-REQUIRED'  limit 15;
+---------------------------+---------------------------------+------------+----------+----------+
| name                      | signature                       | returnType | source   | internal |
+---------------------------+---------------------------------+------------+----------+----------+
| !=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <>                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| =                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| ==                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT

[jira] [Updated] (DRILL-6956) Maintain a single entry for Drill Version in the pom file

2019-01-21 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6956:

Due Date: 25/Feb/19

> Maintain a single entry for Drill Version in the pom file
> -
>
> Key: DRILL-6956
> URL: https://issues.apache.org/jira/browse/DRILL-6956
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build  Test
>Affects Versions: 1.15.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> Currently, updating the version information for a Drill release involves 
> updating 30+ pom files.
> The right way would be to use the Multi Module Setup for Maven CI.
> https://maven.apache.org/maven-ci-friendly.html#Multi_Module_Setup
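
For reference, a minimal sketch of the CI-friendly setup described on that Maven 
page (illustrative only; not the actual Drill pom changes):
{code}
<!-- Parent pom: the version is defined once, via a property -->
<project>
  <groupId>org.apache.drill</groupId>
  <artifactId>drill-root</artifactId>
  <version>${revision}</version>
  <properties>
    <revision>1.16.0-SNAPSHOT</revision>
  </properties>
</project>

<!-- Each child module references the parent through the same property -->
<parent>
  <groupId>org.apache.drill</groupId>
  <artifactId>drill-root</artifactId>
  <version>${revision}</version>
</parent>
{code}
Per the Maven documentation, the flatten-maven-plugin (or a {{-Drevision=...}} 
override in {{.mvn/maven.config}}) is then needed so that installed or deployed 
poms carry the resolved version.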



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6937) sys.functions table needs a fix in the names column

2019-01-21 Thread Kunal Khatua (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16748280#comment-16748280
 ] 

Kunal Khatua commented on DRILL-6937:
-

[~khfaraaz] the {{/int}} function name is correct. See 
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/DateIntervalFunctionTemplates/IntervalNumericArithmetic.java#L133

That said, I'm not sure if this is really a bug, since there have been other 
functions, like {{$sum0}} around as well:
https://github.com/apache/drill/blame/master/exec/java-exec/src/main/codegen/templates/SumZeroAggr.java#L48

Not all functions in the {{sys.functions}} table are necessarily used by their 
alphabetical names; some are exposed to users as mathematical symbols (operators).

For example, the _less than_ or _greater than_ operator can be seen here in 
multiple forms.
{code}
0: jdbc:drill:drillbit=kk127> select * from sys.functions where returnType = 
'BIT' and signature = 'BIGINT-REQUIRED,BIGINT-REQUIRED'  limit 15;
+---------------------------+---------------------------------+------------+----------+----------+
| name                      | signature                       | returnType | source   | internal |
+---------------------------+---------------------------------+------------+----------+----------+
| !=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| <>                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| =                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| ==                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| >                         | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| >=                        | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| equal                     | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| greater_than              | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| greater_than_or_equal_to  | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| less_than                 | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| less_than_or_equal_to     | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
| not_equal                 | BIGINT-REQUIRED,BIGINT-REQUIRED | BIT        | built-in | false    |
+---------------------------+---------------------------------+------------+----------+----------+
14 rows selected (0.282 seconds)
{code}

So, I'm not sure this would qualify as a bug. 
[~arina] do you agree?

> sys.functions table needs a fix in the names column
> ---
>
> Key: DRILL-6937
> URL: https://issues.apache.org/jira/browse/DRILL-6937
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.15.0
>Reporter: Khurram Faraaz
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.16.0
>
>
> The function names in the name column of sys.functions are, in some cases, the 
> operators. This is not the expected behavior; the name column should have 
> actual names rather than the operators.
> I am on Drill 1.15.0 commit : 8743e8f1e8d5bca4d67c94d07a8560ad356ff2b6
> {noformat}
> Apache Drill 1.15.0
> "Data is the new oil. Ready to Drill some?"
> 0: jdbc:drill:schema=dfs.tmp> select count(*) from sys.functions;
> +---------+
> | EXPR$0  |
> +---------+
> | 2846    |
> +---------+
> 1 row selected (0.327 seconds)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct name from sys.functions limit 
> 12;
> +--------+
> | name   |
> +--------+
> | !=     |
> | $sum0  |
> | &&     |
> | -      |
> | /int   |
> | <      |
> | <=     |
> | <>     |
> | =      |
> | ==     |
> | >      |
> | >=     |
> +--------+
> 12 rows selected (0.175 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

