[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488461#comment-16488461
 ] 

ASF GitHub Bot commented on DRILL-6435:
---

paul-rogers commented on issue #1286: DRILL-6435: MappingSet is stateful, so it 
can't be shared between threads
URL: https://github.com/apache/drill/pull/1286#issuecomment-391595679
 
 
   Thanks much for the explanation: I learned something new!
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
> Fix For: 1.14.0
>
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with a visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6179) Added pcapng-format support

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6179:
-
Reviewer: Paul Rogers

> Added pcapng-format support
> ---
>
> Key: DRILL-6179
> URL: https://issues.apache.org/jira/browse/DRILL-6179
> Project: Apache Drill
>  Issue Type: New Feature
>Affects Versions: 1.13.0
>Reporter: Vlad
>Assignee: Vlad
>Priority: Major
>  Labels: doc-impacting
> Fix For: 1.14.0
>
>
> The _PCAP Next Generation Dump File Format_ (or pcapng for short) [1] is an 
> attempt to overcome the limitations of the currently widely used (but 
> limited) libpcap format.
> At a first level, it is desirable to query and filter by source and 
> destination IP and port, and src/dest mac addresses or by protocol. Beyond 
> that, however, it would be very useful to be able to group packets by TCP 
> session and eventually to look at packet contents.
> Initial work is available at  
> https://github.com/mapr-demos/drill/tree/pcapng_dev
> [1] https://pcapng.github.io/pcapng/
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6415:
-
Reviewer: Timothy Farkas

> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)  Time 
> elapsed: 180.028 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203)
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242)
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454)
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405)
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
> {code}



--

[jira] [Commented] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488353#comment-16488353
 ] 

ASF GitHub Bot commented on DRILL-5977:
---

aravi5 commented on issue #1272: DRILL-5977: Filter Pushdown in Drill-Kafka 
plugin
URL: https://github.com/apache/drill/pull/1272#issuecomment-391573236
 
 
   Thanks @akumarb2010 for your comments. I have a few questions about them.
   
   >In Kafka, offset scope itself is per partition. I am unable to find any use 
case, where we can take the range of offsets and apply on all partitions. In 
most of the scenarios they may not be valid offsets.
   >IMHO, we should only apply predicate pushdown where we have exact scan 
specs.
   
   1. Take this query, for example:
   ```
   SELECT * FROM kafka.LogEventStream WHERE kafkaMsgOffset >= 1000 AND 
kafkaMsgOffset < 2000
   ```
   
   Did you mean that we do not apply predicate pushdown for such conditions? If 
we do not push down, all partitions will be scanned from `startOffset` to 
`endOffset`. Instead, by supporting pushdown for such queries, we are limiting 
the scan range wherever possible. Are there any drawbacks of applying pushdown 
on such conditions?
   
   Also, the pushdown implementation handles cases where the specified offsets 
are invalid.
   
   2. Can you elaborate more on this?
   > we can use this predicate pushdown feature for external checkpointing 
mechanism.
   
   3. 
   >And coming to time stamps, my point is, in case of invalid partitionId, 
query might block indefinitely with this feature. Whereas, without this 
feature, we will return empty results.
   
   Even with pushdown we will return empty results. Pushdown is applied to each 
of the predicates independently and the results are merged. The implementation 
ensures that `offsetsForTimes` is called only for valid partitions (i.e., partitions 
returned by `partitionsFor`). Hence we will not run into the situation where 
`offsetsForTimes` blocks indefinitely.
   
   I will add a test case for this situation. (I have already added cases for 
predicates with invalid offsets and timestamps.)
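
A generic sketch of the approach described above (assumptions: plain Kafka consumer API only, hypothetical class and method names; this is not the plugin's actual code from PR #1272). It shows how a predicate such as `kafkaMsgOffset >= 1000 AND kafkaMsgOffset < 2000` can be clamped to each partition's real offset range, and how timestamp predicates are resolved only for partitions returned by `partitionsFor`, so `offsetsForTimes` is never called for an invalid partition:

```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class OffsetPushdownSketch {

  // For a pushed-down predicate like `kafkaMsgOffset >= 1000 AND kafkaMsgOffset < 2000`,
  // build a per-partition [start, stop) scan range clamped to the offsets that actually
  // exist, so invalid offsets simply shrink the scan instead of breaking it.
  static Map<TopicPartition, long[]> scanRanges(Consumer<?, ?> consumer, String topic,
                                                long pushedStart, long pushedEnd) {
    List<TopicPartition> tps = new ArrayList<>();
    for (PartitionInfo p : consumer.partitionsFor(topic)) {   // only partitions that exist
      tps.add(new TopicPartition(topic, p.partition()));
    }
    Map<TopicPartition, Long> begin = consumer.beginningOffsets(tps);
    Map<TopicPartition, Long> end = consumer.endOffsets(tps);

    Map<TopicPartition, long[]> ranges = new HashMap<>();
    for (TopicPartition tp : tps) {
      long start = Math.max(pushedStart, begin.get(tp));
      long stop = Math.min(pushedEnd, end.get(tp));
      if (start < stop) {                                     // empty range: skip partition
        ranges.put(tp, new long[] { start, stop });
      }
    }
    return ranges;
  }

  // For a timestamp predicate, offsets are looked up only for partitions returned by
  // partitionsFor(), so offsetsForTimes() is never asked about a bogus partition id.
  static Map<TopicPartition, OffsetAndTimestamp> offsetsSince(Consumer<?, ?> consumer,
                                                              String topic, long timestampMs) {
    Map<TopicPartition, Long> query = new HashMap<>();
    for (PartitionInfo p : consumer.partitionsFor(topic)) {
      query.put(new TopicPartition(topic, p.partition()), timestampMs);
    }
    return consumer.offsetsForTimes(query);
  }
}
```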


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.14.0
>
>
> As part of Kafka storage plugin review, below is the suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-5977:
-
Reviewer: B Anil Kumar

> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.14.0
>
>
> As part of Kafka storage plugin review, below is the suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6432) Allow to print the visualized query plan only

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6432:
-
Reviewer: Sorabh Hamirwasia

> Allow to print the visualized query plan only
> -
>
> Key: DRILL-6432
> URL: https://issues.apache.org/jira/browse/DRILL-6432
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Web Server
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Minor
> Fix For: 1.14.0
>
>
> Provide a convenient way to print the Visual Query Plan only, instead of 
> the entire profile page.
> This also allows specifying the zoom level when printing large 
> complex plans that might span multiple pages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488344#comment-16488344
 ] 

ASF GitHub Bot commented on DRILL-6438:
---

priteshm commented on issue #1284: DRILL-6438: Remove excess logging from some 
tests.
URL: https://github.com/apache/drill/pull/1284#issuecomment-391571988
 
 
   @vvysotskyi could you please review this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",

[jira] [Updated] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6438:
-
Reviewer: Volodymyr Vysotskyi

> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   "delimiter" : "|"
> },
> "csv" : {
>   "type" : "text",
>   "extensions" : [ "csv" ],
>   "delimiter" : ","
> },
> "tsv" : {
>   "type" : "text",
>   "extensions" : [ "tsv" ],
>   "delimiter" : "\t"
> },
> "httpd" : {
>   "type" : "httpd",
>   "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
> },
> "parquet" : {
>   "type" : "parquet"
> 

[jira] [Updated] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6438:
-
Reviewer:   (was: Arina Ielchiieva)

> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   "delimiter" : "|"
> },
> "csv" : {
>   "type" : "text",
>   "extensions" : [ "csv" ],
>   "delimiter" : ","
> },
> "tsv" : {
>   "type" : "text",
>   "extensions" : [ "tsv" ],
>   "delimiter" : "\t"
> },
> "httpd" : {
>   "type" : "httpd",
>   "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
> },
> "parquet" : {
>   "type" : "parquet"

[jira] [Updated] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6437:
-
Reviewer: Vitalii Diravka  (was: Arina Ielchiieva)

> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
> Fix For: 1.14.0
>
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core Apache Travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378
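
A commonly used mitigation for this class of problem (a general Maven/CI technique, not necessarily the exact change made in PR #1285) is to run Maven in batch mode and raise the log level of its transfer listener, which suppresses the per-chunk `Progress (1): ...` lines while keeping warnings and errors:

{code}
# Example CI invocation, e.g. in .travis.yml. --batch-mode avoids the interactive
# carriage-return progress output, and the property (Maven 3.1+) quiets the transfer
# listener's routine download messages.
mvn install --batch-mode \
  -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
{code}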



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6438:
-
Fix Version/s: 1.14.0

> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   "delimiter" : "|"
> },
> "csv" : {
>   "type" : "text",
>   "extensions" : [ "csv" ],
>   "delimiter" : ","
> },
> "tsv" : {
>   "type" : "text",
>   "extensions" : [ "tsv" ],
>   "delimiter" : "\t"
> },
> "httpd" : {
>   "type" : "httpd",
>   "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
> },
> "parquet" : {
>   "type" : "parquet"
> },

[jira] [Commented] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488336#comment-16488336
 ] 

ASF GitHub Bot commented on DRILL-6437:
---

priteshm commented on issue #1285: DRILL-6437: Removed excess maven logging 
when downloading dependencies. This fixed Travis failures due to log overflow.
URL: https://github.com/apache/drill/pull/1285#issuecomment-391571029
 
 
   @vdiravka can you please review this change?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
> Fix For: 1.14.0
>
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core Apache Travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488334#comment-16488334
 ] 

ASF GitHub Bot commented on DRILL-6437:
---

priteshm commented on issue #1285: DRILL-6437: Removed excess maven logging 
when downloading dependencies. This fixed Travis failures due to log overflow.
URL: https://github.com/apache/drill/pull/1285#issuecomment-391571029
 
 
   @parthchandra or @arina-ielchiieva can you please review this change?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
> Fix For: 1.14.0
>
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core Apache Travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread Pritesh Maker (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker updated DRILL-6437:
-
Fix Version/s: 1.14.0

> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
> Fix For: 1.14.0
>
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core Apache Travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488292#comment-16488292
 ] 

ASF GitHub Bot commented on DRILL-6435:
---

vrozov commented on issue #1286: DRILL-6435: MappingSet is stateful, so it 
can't be shared between threads
URL: https://github.com/apache/drill/pull/1286#issuecomment-391560716
 
 
   > The specific fix may be OK, but I believe it overlooks a broader point. 
The `MappingSet` is supposed to be a static association of elements used in 
code generation. This mapping is meant to be static and thus sharable.
   
   > If so, then making the objects non-static makes sense; but I wonder what 
other things will be broken since the original design appeared to be that 
`MappingSets` are static.
   
   I don't see `MappingSet` being designed to be `static` and thus sharable 
between threads. The state of the object is modified during code generation 
(expression compilation), and if there are multiple queries submitted for code 
generation at the same time, the state machine of the `MappingSet` goes wild, as 
a call to `enterConstant()` may be followed by a call to `enterConstant()` for 
a different query on a different thread (I am not even worried about 
multi-thread safety at this point).
   
   > If there is an issue, then somewhere we must have added code that alters 
these objects per-query. We might want to back up and find that use, then ask 
if it is really necessary.
   
   The code that modifies the state of the `MappingSet` object during code 
generation was always there. It is not a newly introduced code.
   
   > At the very least, perhaps we'd want to add a comment explaining why we 
need per-query modifications rather than the standard, static definitions.
   
   A `static` definition for `MappingSet` is not standard. In the majority of 
places it was already declared as an instance variable (not `static`).
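
To make the failure mode concrete, here is a minimal, self-contained sketch. It does not use Drill's real `MappingSet` API: `StatefulMapping` and its `enterConstant()`/`exitConstant()` methods are illustrative stand-ins. Two threads driving one shared static instance will intermittently observe a depth left over from the other thread's pass, while a per-query instance never does:

```
public class SharedMappingDemo {

  // Toy stand-in for a stateful visitor helper (NOT Drill's real MappingSet API):
  // it tracks how deep the current code-generation pass is inside "constant" scopes.
  static final class StatefulMapping {
    private int constantDepth = 0;

    void enterConstant() { constantDepth++; }
    void exitConstant() { constantDepth--; }
    boolean isWithinConstant() { return constantDepth > 0; }
  }

  // The problematic pattern: a single static instance shared by every query/thread.
  private static final StatefulMapping SHARED = new StatefulMapping();

  // Simulates one query's code-generation pass against the given mapping.
  static boolean generate(StatefulMapping mapping) throws InterruptedException {
    mapping.enterConstant();
    Thread.sleep(1);                      // widen the window for interleaving
    boolean inConstant = mapping.isWithinConstant();
    mapping.exitConstant();
    // A balanced enter/exit should leave the depth at zero for this pass.
    return inConstant && !mapping.isWithinConstant();
  }

  public static void main(String[] args) throws InterruptedException {
    Runnable query = () -> {
      try {
        if (!generate(SHARED)) {
          System.out.println("shared static mapping: state clobbered by a concurrent query");
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    };
    Thread q1 = new Thread(query);
    Thread q2 = new Thread(query);        // a second "query" interleaves enter/exit calls
    q1.start();
    q2.start();
    q1.join();
    q2.join();

    // The fix in spirit: a per-query instance only ever sees one pass at a time.
    System.out.println("per-query mapping consistent: " + generate(new StatefulMapping()));
  }
}
```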


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with a visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6436) Store context and name in AbstractStoragePlugin instead of replicating fields in each StoragePlugin

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488278#comment-16488278
 ] 

ASF GitHub Bot commented on DRILL-6436:
---

Ben-Zvi closed pull request #1282: DRILL-6436: Storage Plugin to have name and 
context moved to Abstract…
URL: https://github.com/apache/drill/pull/1282
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseStoragePlugin.java
 
b/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseStoragePlugin.java
index 62f351c458..18428c6907 100644
--- 
a/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseStoragePlugin.java
+++ 
b/contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseStoragePlugin.java
@@ -38,7 +38,6 @@
 public class HBaseStoragePlugin extends AbstractStoragePlugin {
   private static final HBaseConnectionManager hbaseConnectionManager = 
HBaseConnectionManager.INSTANCE;
 
-  private final DrillbitContext context;
   private final HBaseStoragePluginConfig storeConfig;
   private final HBaseSchemaFactory schemaFactory;
   private final HBaseConnectionKey connectionKey;
@@ -47,17 +46,13 @@
 
   public HBaseStoragePlugin(HBaseStoragePluginConfig storeConfig, 
DrillbitContext context, String name)
   throws IOException {
-    this.context = context;
+    super(context, name);
     this.schemaFactory = new HBaseSchemaFactory(this, name);
     this.storeConfig = storeConfig;
     this.name = name;
     this.connectionKey = new HBaseConnectionKey();
   }
 
-  public DrillbitContext getContext() {
-    return this.context;
-  }
-
   @Override
   public boolean supportsRead() {
 return true;
diff --git 
a/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveStoragePlugin.java
 
b/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveStoragePlugin.java
index 1ac1525c22..449a6f90b3 100644
--- 
a/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveStoragePlugin.java
+++ 
b/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveStoragePlugin.java
@@ -55,14 +55,12 @@
 
   private final HiveStoragePluginConfig config;
   private HiveSchemaFactory schemaFactory;
-  private final DrillbitContext context;
-  private final String name;
   private final HiveConf hiveConf;
 
-  public HiveStoragePlugin(HiveStoragePluginConfig config, DrillbitContext 
context, String name) throws ExecutionSetupException {
+  public HiveStoragePlugin(HiveStoragePluginConfig config, DrillbitContext 
context, String name)
+  throws ExecutionSetupException {
+    super(context, name);
     this.config = config;
-    this.context = context;
-    this.name = name;
     this.hiveConf = createHiveConf(config.getHiveConfigOverride());
     this.schemaFactory = new HiveSchemaFactory(this, name, hiveConf);
   }
@@ -75,14 +73,6 @@ public HiveStoragePluginConfig getConfig() {
 return config;
   }
 
-  public String getName(){
-    return name;
-  }
-
-  public DrillbitContext getContext() {
-    return context;
-  }
-
   @Override
   public HiveScan getPhysicalScan(String userName, JSONOptions selection, 
List columns) throws IOException {
 HiveReadEntry hiveReadEntry = selection.getListWith(new ObjectMapper(), 
new TypeReference(){});
@@ -150,7 +140,7 @@ public synchronized void registerSchemas(SchemaConfig 
schemaConfig, SchemaPlus p
   logger.warn("Schema factory forced close failed, error ignored", t);
 }
 try {
-  schemaFactory = new HiveSchemaFactory(this, name, hiveConf);
+  schemaFactory = new HiveSchemaFactory(this, getName(), hiveConf);
 } catch (ExecutionSetupException e) {
   throw new DrillRuntimeException(e);
 }
diff --git 
a/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcStoragePlugin.java
 
b/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcStoragePlugin.java
index 7a58f7c13b..c38ea3b97b 100755
--- 
a/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcStoragePlugin.java
+++ 
b/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcStoragePlugin.java
@@ -77,17 +77,14 @@
 
 
   private final JdbcStorageConfig config;
-  private final DrillbitContext context;
   private final DataSource source;
-  private final String name;
   private final SqlDialect dialect;
   private final DrillJdbcConvention convention;
 
 
   public JdbcStoragePlugin(JdbcStorageConfig config, DrillbitContext context, 
String name) {
-    this.context = context;
+    super(context, name);
     this.config = config;
-this.name = 
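
The diff is cut off by the mail archive, but every hunk follows the same pattern: the per-plugin `context` and `name` fields (and their getters) are removed, and each constructor delegates via `super(context, name)` so the base class holds that state once. A minimal sketch of the resulting shape (hypothetical class names, simplified; not the actual Drill sources):

```
import org.apache.drill.exec.server.DrillbitContext;

// The base class owns the DrillbitContext and plugin name once, for every plugin.
abstract class BaseStoragePluginSketch {
  private final DrillbitContext context;
  private final String name;

  protected BaseStoragePluginSketch(DrillbitContext context, String name) {
    this.context = context;
    this.name = name;
  }

  public DrillbitContext getContext() {
    return context;
  }

  public String getName() {
    return name;
  }
}

// A concrete plugin (hypothetical ExamplePlugin) now keeps only its own state.
class ExamplePlugin extends BaseStoragePluginSketch {
  ExamplePlugin(DrillbitContext context, String name) {
    super(context, name);
    // plugin-specific setup (schema factory, connection key, ...) goes here
  }
}
```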

[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488268#comment-16488268
 ] 

ASF GitHub Bot commented on DRILL-6435:
---

paul-rogers commented on issue #1286: DRILL-6435: MappingSet is stateful, so it 
can't be shared between threads
URL: https://github.com/apache/drill/pull/1286#issuecomment-391554391
 
 
   The specific fix may be OK, but I believe it overlooks a broader point. The 
`MappingSet` is supposed to be a static association of elements used in code 
generation. This mapping is meant to be static and thus sharable.
   
   If there is an issue, then somewhere we must have added code that alters 
these objects per-query. We might want to back up and find that use, then ask 
if it is really necessary.
   
   If so, then making the objects non-static makes sense; but I wonder what 
other things will be broken since the original design appeared to be that 
`MappingSet`s are static.
   
   At the very least, perhaps we'd want to add a comment explaining why we need 
per-query modifications rather than the standard, static definitions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with a visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-6435:
---
Comment: was deleted

(was: The specific fix may be OK, but I believe it overlooks a broader point. 
The {{MappingSet}} is supposed to be a static association of elements used in 
code generation. This mapping is meant to be static and thus sharable.

If there is an issue, then somewhere we must have added code that alters these 
objects per-query. We might want to back up and find that use, then ask if it 
is really necessary.

If so, then making the objects non-static makes sense. Perhaps we'd want to add 
a comment explaining why we need per-query modifications rather than the 
standard, static definitions.)

> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with a visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488264#comment-16488264
 ] 

Paul Rogers commented on DRILL-6435:


The specific fix may be OK, but I believe it overlooks a broader point. The 
{{MappingSet}} is supposed to be a static association of elements used in code 
generation. This mapping is meant to be static and thus sharable.

If there is an issue, then somewhere we must have added code that alters these 
objects per-query. We might want to back up and find that use, then ask if it 
is really necessary.

If so, then making the objects non-static makes sense. Perhaps we'd want to add 
a comment explaining why we need per-query modifications rather than the 
standard, static definitions.

> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with a visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488246#comment-16488246
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190438293
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -237,7 +234,7 @@ public void testRestApi() throws Exception {
   }
 
   Assert.assertTrue(listener.isDone());
-  Assert.assertEquals(1,drillbitEndpoints.size());
+  Assert.assertEquals(2, drillbitEndpoints.size());
 
 Review comment:
   K thx.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> 

[jira] [Updated] (DRILL-5990) Apache Drill /status API returns OK ('Running') even with JRE while queries will not work - make status API reflect the fact that Drill is broken on JRE or stop Drill sta

2018-05-23 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-5990:

Reviewer: Parth Chandra

> Apache Drill /status API returns OK ('Running') even with JRE while queries 
> will not work - make status API reflect the fact that Drill is broken on JRE 
> or stop Drill starting up with JRE
> ---
>
> Key: DRILL-5990
> URL: https://issues.apache.org/jira/browse/DRILL-5990
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.10.0, 1.11.0
> Environment: Docker
>Reporter: Hari Sekhon
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> When running Apache Drill 1.10 / 1.11 on a JRE, it appears that 
> queries no longer run without a JDK, but the /status monitoring API endpoint 
> does not reflect the fact that Apache Drill will not work and still returns 
> 'Running'.
> While 'Running' is technically true in that the process is up, Drill is effectively 
> broken; the /status endpoint should either reflect that it is broken, or Drill 
> should refuse to start up on a JRE altogether.
> See this ticket for more information:
> https://github.com/HariSekhon/Dockerfiles/pull/15



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6434) Accessing Azure Data Lake Store (HDFS)

2018-05-23 Thread Kunal Khatua (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488220#comment-16488220
 ] 

Kunal Khatua commented on DRILL-6434:
-

[~sebclaude], looks like the _*adl*_ filesystem has HDFS client support since 
Hadoop 2.8.3 (?).

Apache Drill is currently tested for Hadoop 2.7.1.

You could try building Drill by changing the {{pom.xml}} here: 
[https://github.com/apache/drill/blob/master/pom.xml#L68] and trying it out. My 
guess is that it will pretty much work out of the box.

If that helps, we can probably update this Jira's title to reflect the subject 
as {color:#14892c}'Upgrade Hadoop version to 2.8.3 for Accessing Azure Data 
Lake FS ( {{adl://}} ) '{color}
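
A classpath sanity check after such a build could look like the sketch below 
(illustrative only: it assumes the Hadoop 2.8.x client plus the 
hadoop-azure-datalake module are on the classpath, that the ADL OAuth 
credentials are already configured in core-site.xml, and "myaccount" is a 
placeholder account name):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AdlSmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml
    // If the adl:// scheme resolves, the 2.8.x client jars are wired correctly.
    FileSystem fs = FileSystem.get(
        URI.create("adl://myaccount.azuredatalakestore.net/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}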

> Accessing Azure Data Lake Store (HDFS)
> --
>
> Key: DRILL-6434
> URL: https://issues.apache.org/jira/browse/DRILL-6434
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Affects Versions: 1.13.0
>Reporter: SEBASTIEN CLAUDE
>Priority: Major
>
> It would be great if we could access Azure Data Lake Store with Drill.
>  
> As Azure ADLS is an HDFS-compatible store, it should work easily, but the URI 
> scheme is adl:// and not hdfs://, so Drill doesn't support it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6433) a process(find) that never finishes slows down apache drill

2018-05-23 Thread Kunal Khatua (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488215#comment-16488215
 ] 

Kunal Khatua commented on DRILL-6433:
-

[~mehran] , what is it that you are trying to do with the {{find}} command that 
you've issued? It's not clear how or why this is a bug in Drill.

> a process(find) that never finishes slows down apache drill
> ---
>
> Key: DRILL-6433
> URL: https://issues.apache.org/jira/browse/DRILL-6433
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: mehran
>Priority: Major
>
> In version 1.13 we have a process as follows:
> find -L / -name java -type f
> This process is added to the system every day, and it never finishes.
> I have 100 such processes on my server.
> I did not succeed in setting JAVA_HOME in drill-env.sh, so I set it in /etc/bashrc.
> These many processes slow down Apache Drill.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6182) Doc bug on parameter 'drill.exec.spill.fs'

2018-05-23 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6182:

Fix Version/s: (was: 1.13.0)
   1.14.0

> Doc bug on parameter 'drill.exec.spill.fs'
> --
>
> Key: DRILL-6182
> URL: https://issues.apache.org/jira/browse/DRILL-6182
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Satoshi Yamada
>Assignee: Bridget Bevens
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.14.0
>
>
> Parameter 'drill.exe.spill.fs' should be 'drill.exec.spill.fs' (with "c" 
> after exe).
> Observed in the documents below.
> [https://drill.apache.org/docs/start-up-options/]
> [https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6182) Doc bug on parameter 'drill.exec.spill.fs'

2018-05-23 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6182:

Fix Version/s: 1.13.0

> Doc bug on parameter 'drill.exec.spill.fs'
> --
>
> Key: DRILL-6182
> URL: https://issues.apache.org/jira/browse/DRILL-6182
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Satoshi Yamada
>Assignee: Bridget Bevens
>Priority: Trivial
>  Labels: documentation
> Fix For: 1.14.0
>
>
> Parameter 'drill.exe.spill.fs' should be 'drill.exec.spill.fs' (with "c" 
> after exe).
> Observed in the documents below.
> [https://drill.apache.org/docs/start-up-options/]
> [https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (DRILL-5261) Expose REST endpoint in zookeeper

2018-05-23 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua closed DRILL-5261.
---
Resolution: Duplicate

Resolved by DRILL-6289

> Expose REST endpoint in zookeeper
> -
>
> Key: DRILL-5261
> URL: https://issues.apache.org/jira/browse/DRILL-5261
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Uwe L. Korn
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> It would be nice to also publish the REST API endpoint of each Drillbit in 
> ZooKeeper. This would mean that we need an additional entry in 
> {{DrillbitEndpoint}}. While I know how to add the attribute to the 
> ProtoBuf definition and fill it with the correct information, 
> I'm unsure whether we need migration code to support older 
> {{DrillbitEndpoint}} implementations that don't have this attribute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488183#comment-16488183
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

dvjyothsna commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190425609
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -237,7 +234,7 @@ public void testRestApi() throws Exception {
   }
 
   Assert.assertTrue(listener.isDone());
-  Assert.assertEquals(1,drillbitEndpoints.size());
+  Assert.assertEquals(2, drillbitEndpoints.size());
 
 Review comment:
   Now we are shutting down only one drillbit instead of two.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at 

[jira] [Commented] (DRILL-6443) Search feature for profiles is available only for running OR completed queries, but not both

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488173#comment-16488173
 ] 

ASF GitHub Bot commented on DRILL-6443:
---

kkhatua commented on issue #1287: DRILL-6443: Enable Search for both running 
AND completed queries
URL: https://github.com/apache/drill/pull/1287#issuecomment-391527798
 
 
   @arina-ielchiieva  since you had reviewed the original PR #1029 for 
[DRILL-5867](https://issues.apache.org/jira/browse/DRILL-5867), could you do a 
review of this small change?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Search feature for profiles is available only for running OR completed 
> queries, but not both
> 
>
> Key: DRILL-6443
> URL: https://issues.apache.org/jira/browse/DRILL-6443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.13.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running a query in Drill, the {{/profiles}} page will show the search 
> (and pagination) capabilities only for the top most visible table (i.e. 
> _Running Queries_ ).
> The _Completed Queries_ table will show the search feature only when there 
> are no running queries. This is because the backend uses a generalized 
> freemarker macro to define the search capabilities for the tables being 
> rendered. With running queries, both the _running_ and _completed queries_ 
> tables have the same element ID, resulting in the search capability only 
> being applied to the first table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6443) Search feature for profiles is available only for running OR completed queries, but not both

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488171#comment-16488171
 ] 

ASF GitHub Bot commented on DRILL-6443:
---

kkhatua commented on issue #1287: DRILL-6443: Enable Search for both running 
AND completed queries
URL: https://github.com/apache/drill/pull/1287#issuecomment-391527560
 
 
   This is how the search bars appear with running queries:
   
![image](https://user-images.githubusercontent.com/4335237/40455627-b7cfabb4-5ea2-11e8-8f5e-8e8dbdd21247.png)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Search feature for profiles is available only for running OR completed 
> queries, but not both
> 
>
> Key: DRILL-6443
> URL: https://issues.apache.org/jira/browse/DRILL-6443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.13.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running a query in Drill, the {{/profiles}} page will show the search 
> (and pagination) capabilities only for the top most visible table (i.e. 
> _Running Queries_ ).
> The _Completed Queries_ table will show the search feature only when there 
> are no running queries. This is because the backend uses a generalized 
> freemarker macro to define the search capabilities for the tables being 
> rendered. With running queries, both the _running_ and _completed queries_ 
> tables have the same element ID, resulting in the search capability only 
> being applied to the first table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6443) Search feature for profiles is available only for running OR completed queries, but not both

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488169#comment-16488169
 ] 

ASF GitHub Bot commented on DRILL-6443:
---

kkhatua opened a new pull request #1287: DRILL-6443: Enable Search for both 
running AND completed queries
URL: https://github.com/apache/drill/pull/1287
 
 
   When running a query in Drill, the `/profiles` page will show the search 
(and pagination) capabilities only for the top most visible table (i.e. Running 
Queries ).
   
   The _Completed Queries_ table will show the search feature only when there 
are no _Running Queries_. This is because the backend uses a generalized 
freemarker macro to define the search capabilities for the tables being 
rendered. With running queries, both the _Running_ and _Completed_ queries tables 
have the same element ID, resulting in the search capability only being applied 
to the first table.
   
   This modifies the Freemarker macro to take an additional argument for 
distinguishing between _Running_ and _Completed_ _Queries_ tables.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Search feature for profiles is available only for running OR completed 
> queries, but not both
> 
>
> Key: DRILL-6443
> URL: https://issues.apache.org/jira/browse/DRILL-6443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.13.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running a query in Drill, the {{/profiles}} page will show the search 
> (and pagination) capabilities only for the top most visible table (i.e. 
> _Running Queries_ ).
> The _Completed Queries_ table will show the search feature only when there 
> are no running queries. This is because the backend uses a generalized 
> freemarker macro to define the search capabilities for the tables being 
> rendered. With running queries, both the _running_ and _completed queries_ 
> tables have the same element ID, resulting in the search capability only 
> being applied to the first table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6443) Search feature for profiles is available only for running OR completed queries, but not both

2018-05-23 Thread Kunal Khatua (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-6443:

Reviewer: Arina Ielchiieva

> Search feature for profiles is available only for running OR completed 
> queries, but not both
> 
>
> Key: DRILL-6443
> URL: https://issues.apache.org/jira/browse/DRILL-6443
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.13.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When running a query in Drill, the {{/profiles}} page will show the search 
> (and pagination) capabilities only for the top most visible table (i.e. 
> _Running Queries_ ).
> The _Completed Queries_ table will show the search feature only when there 
> are no running queries. This is because the backend uses a generalized 
> freemarker macro to define the search capabilities for the tables being 
> rendered. With running queries, both the _running_ and _completed queries_ 
> tables have the same element ID, resulting in the search capability only 
> being applied to the first table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488152#comment-16488152
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190422687
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -163,7 +163,7 @@ public void run() {
 }
   }).start();
 
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
 Review comment:
   I don't think we need this sleep here since we are already checking the 
termination condition in the loop below.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488153#comment-16488153
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190422805
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -115,7 +115,7 @@ public void run() {
 }
   }).start();
 
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
 Review comment:
   Don't need this sleep see other comments below.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488148#comment-16488148
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190420698
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -237,7 +234,7 @@ public void testRestApi() throws Exception {
   }
 
   Assert.assertTrue(listener.isDone());
-  Assert.assertEquals(1,drillbitEndpoints.size());
+  Assert.assertEquals(2, drillbitEndpoints.size());
 
 Review comment:
   Why did we have to change 1 to 2 here? Wasn't this test already passing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488154#comment-16488154
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190422417
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -201,24 +201,21 @@ public void testRestApi() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
   Thread.sleep(60000);
-  while( port < 8049) {
-URL url = new URL("http://localhost:"+port+"/gracefulShutdown;);
-HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-conn.setRequestMethod("POST");
-if (conn.getResponseCode() != 200) {
-  throw new RuntimeException("Failed : HTTP error code : "
-  + conn.getResponseCode());
-}
-port++;
+  URL url = new URL("http://localhost:; + port + "/gracefulShutdown");
+  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+  conn.setRequestMethod("POST");
+  if (conn.getResponseCode() != 200) {
+throw new RuntimeException("Failed : HTTP error code : "
++ conn.getResponseCode());
   }
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
 Review comment:
   In fact, since we are doing the same thing in the other test, could you move 
this code into a helper method?
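
   One possible shape for that helper (a sketch only; it assumes the method sits in 
TestGracefulShutdown next to the existing imports and constants, and the name is 
illustrative):
   
   ```
   // Posts to the given web endpoint, then polls until the expected number
   // of drillbits is registered or WAIT_TIMEOUT_MS elapses.
   private static void postAndAwaitEndpoints(int webPort, String endpoint,
       ClusterFixture cluster, int expectedEndpoints) throws Exception {
     URL url = new URL("http://localhost:" + webPort + endpoint);
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setRequestMethod("POST");
     if (conn.getResponseCode() != 200) {
       throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
     }
   
     long stopTime = System.currentTimeMillis() + WAIT_TIMEOUT_MS;
     while (System.currentTimeMillis() < stopTime) {
       int online = cluster.drillbit()
           .getContext()
           .getClusterCoordinator()
           .getAvailableEndpoints()
           .size();
       if (online == expectedEndpoints) {
         return;
       }
       Thread.sleep(100L);  // poll instead of a fixed grace-period sleep
     }
     Assert.fail("Timed out waiting for " + expectedEndpoints + " online drillbits");
   }
   ```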


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488150#comment-16488150
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190421300
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -252,44 +249,40 @@ public void testRestApiShutdown() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
-  Thread.sleep(1);
-
-  while( port < 8048) {
-URL url = new URL("http://localhost:"+port+"/shutdown;);
-HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-conn.setRequestMethod("POST");
-if (conn.getResponseCode() != 200) {
-  throw new RuntimeException("Failed : HTTP error code : "
-  + conn.getResponseCode());
+  while (true) {
+if (listener.isDone()) {
+  break;
 }
-port++;
-  }
-
-  Thread.sleep(grace_period);
-
-  Collection drillbitEndpoints = cluster.drillbit()
-  .getContext()
-  .getClusterCoordinator()
-  .getAvailableEndpoints();
 
+Thread.sleep(100L);
+  }
+  URL url = new URL("http://localhost:; + port + "/shutdown");
+  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+  conn.setRequestMethod("POST");
+  if (conn.getResponseCode() != 200) {
+throw new RuntimeException("Failed : HTTP error code : "
++ conn.getResponseCode());
+  }
+  Thread.sleep(gracePeriod);
   long currentTime = System.currentTimeMillis();
   long stopTime = currentTime + WAIT_TIMEOUT_MS;
 
-  while (currentTime < stopTime) {
-if (listener.isDone() && drillbitEndpoints.size() == 2) {
+  while(currentTime < stopTime) {
+Collection drillbitEndpoints = cluster.drillbit()
+.getContext()
+.getClusterCoordinator()
+.getAvailableEndpoints();
+if (drillbitEndpoints.size() == 2) {
   return;
 }
-
-Thread.sleep(100L);
 
 Review comment:
   We should keep this sleep. Otherwise we'll hog all the cpu time while we 
wait for the drillbit to terminate.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488155#comment-16488155
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190422532
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -252,44 +249,40 @@ public void testRestApiShutdown() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
-  Thread.sleep(1);
-
-  while( port < 8048) {
-URL url = new URL("http://localhost:"+port+"/shutdown;);
-HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-conn.setRequestMethod("POST");
-if (conn.getResponseCode() != 200) {
-  throw new RuntimeException("Failed : HTTP error code : "
-  + conn.getResponseCode());
+  while (true) {
+if (listener.isDone()) {
+  break;
 }
-port++;
-  }
-
-  Thread.sleep(grace_period);
-
-  Collection drillbitEndpoints = cluster.drillbit()
-  .getContext()
-  .getClusterCoordinator()
-  .getAvailableEndpoints();
 
+Thread.sleep(100L);
+  }
+  URL url = new URL("http://localhost:; + port + "/shutdown");
+  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+  conn.setRequestMethod("POST");
+  if (conn.getResponseCode() != 200) {
+throw new RuntimeException("Failed : HTTP error code : "
++ conn.getResponseCode());
+  }
+  Thread.sleep(gracePeriod);
 
 Review comment:
   Since we are already waiting for a termination condition in the loop below, 
I don't think we need to have this sleep here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488149#comment-16488149
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190420401
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -201,24 +201,21 @@ public void testRestApi() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
   Thread.sleep(60000);
 
 Review comment:
   Why do we need to sleep for 60 seconds here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 18 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488151#comment-16488151
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

ilooner commented on a change in pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#discussion_r190421685
 
 

 ##
 File path: 
exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
 ##
 @@ -201,24 +201,21 @@ public void testRestApi() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
   Thread.sleep(60000);
-  while( port < 8049) {
-URL url = new URL("http://localhost:"+port+"/gracefulShutdown;);
-HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-conn.setRequestMethod("POST");
-if (conn.getResponseCode() != 200) {
-  throw new RuntimeException("Failed : HTTP error code : "
-  + conn.getResponseCode());
-}
-port++;
+  URL url = new URL("http://localhost:; + port + "/gracefulShutdown");
+  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+  conn.setRequestMethod("POST");
+  if (conn.getResponseCode() != 200) {
+throw new RuntimeException("Failed : HTTP error code : "
++ conn.getResponseCode());
   }
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
 Review comment:
   I don't think we need this sleep here. We can change the loop below to check 
for the termination condition.
   
   ```
   while (currentTime < stopTime) {
  Collection<DrillbitEndpoint> drillbitEndpoints = cluster.drillbit()
  .getContext()
  .getClusterCoordinator()
  .getOnlineEndPoints();
 if (drillbitEndpoints.size() == 2) {
   break;
 }
   
 Thread.sleep(100L);
 currentTime = System.currentTimeMillis();
   }
   ```
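
For reference, a self-contained sketch of the bounded-wait pattern suggested above, using the ClusterFixture/DrillbitEndpoint types visible in this test; the helper name, the timeout parameter, and the imports are illustrative assumptions, not part of the PR:

```java
import java.util.Collection;

import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
import org.apache.drill.test.ClusterFixture;

public class WaitUtil {
  // Poll until the expected number of Drillbits is online, or give up after a
  // deadline, instead of sleeping for a fixed grace period.
  public static void waitForOnlineEndpoints(ClusterFixture cluster, int expected,
                                            long timeoutMillis) throws Exception {
    final long stopTime = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < stopTime) {
      Collection<DrillbitEndpoint> endpoints = cluster.drillbit()
          .getContext()
          .getClusterCoordinator()
          .getOnlineEndPoints();
      if (endpoints.size() == expected) {
        return;                      // condition reached; no fixed sleep needed
      }
      Thread.sleep(100L);            // short poll interval
    }
    throw new AssertionError("Timed out waiting for " + expected + " online endpoints");
  }
}
```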


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> 

[jira] [Updated] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread Vlad Rozov (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vlad Rozov updated DRILL-6435:
--
Reviewer: Vitalii Diravka

> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with visitor design pattern, such reuse 
> causes invalid state.
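
To make the failure mode concrete, below is a minimal illustration in plain Java of why a statically shared, stateful visitor-style object breaks under concurrent queries, and the per-use-instance fix; the class and method names are invented for the example and are not Drill's actual {{MappingSet}} API.

{code}
// Hedged illustration only; these names are not Drill's real classes.
class StatefulMapping {
  private int index;                       // mutated while expressions are visited

  void reset() { index = 0; }
  int nextIndex() { return index++; }
}

class CodeGenExample {
  // Problem: one static instance is shared by every query, so two concurrent
  // queries interleave reset()/nextIndex() calls and corrupt each other's state.
  private static final StatefulMapping SHARED = new StatefulMapping();

  String generateShared(String column) {
    SHARED.reset();                        // another thread may reset in between
    return column + "_" + SHARED.nextIndex();
  }

  // Fix in the spirit of the change: create a fresh instance per use so the
  // mutable visitor state stays confined to a single thread.
  String generatePerUse(String column) {
    StatefulMapping mapping = new StatefulMapping();
    mapping.reset();
    return column + "_" + mapping.nextIndex();
  }
}
{code}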



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488137#comment-16488137
 ] 

ASF GitHub Bot commented on DRILL-6435:
---

vrozov opened a new pull request #1286: DRILL-6435: MappingSet is stateful, so 
it can't be shared between threads
URL: https://github.com/apache/drill/pull/1286
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488138#comment-16488138
 ] 

ASF GitHub Bot commented on DRILL-6435:
---

vrozov commented on issue #1286: DRILL-6435: MappingSet is stateful, so it 
can't be shared between threads
URL: https://github.com/apache/drill/pull/1286#issuecomment-391521051
 
 
   @vdiravka Please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6443) Search feature for profiles is available only for running OR completed queries, but not both

2018-05-23 Thread Kunal Khatua (JIRA)
Kunal Khatua created DRILL-6443:
---

 Summary: Search feature for profiles is available only for running 
OR completed queries, but not both
 Key: DRILL-6443
 URL: https://issues.apache.org/jira/browse/DRILL-6443
 Project: Apache Drill
  Issue Type: Bug
  Components: Web Server
Affects Versions: 1.13.0
Reporter: Kunal Khatua
Assignee: Kunal Khatua
 Fix For: 1.14.0


When running a query in Drill, the {{/profiles}} page will show the search (and 
pagination) capabilities only for the topmost visible table (i.e. _Running 
Queries_).

The _Completed Queries_ table will show the search feature only when there are 
no running queries. This is because the backend uses a generalized freemarker 
macro to define the search capabilities for the tables being rendered. When there 
are running queries, both the _running_ and _completed queries_ tables have the 
same element ID, so the search capability is applied only to the first table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread Timothy Farkas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Farkas updated DRILL-6437:
--
Reviewer: Arina Ielchiieva

> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core apache travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488053#comment-16488053
 ] 

ASF GitHub Bot commented on DRILL-6437:
---

ilooner commented on issue #1285: DRILL-6437: Removed excess maven logging when 
downloading dependencies. This fixed Travis failures due to log overflow.
URL: https://github.com/apache/drill/pull/1285#issuecomment-391499218
 
 
   @arina-ielchiieva 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core apache travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6437) Travis Fails Because Logs Are Flooded.

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488052#comment-16488052
 ] 

ASF GitHub Bot commented on DRILL-6437:
---

ilooner opened a new pull request #1285: DRILL-6437: Removed excess maven 
logging when downloading dependencies. This fixed Travis failures due to log 
overflow.
URL: https://github.com/apache/drill/pull/1285
 
 
   The fix was to change the Maven log level when downloading artifacts.
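
A commonly used way to achieve this in CI is to raise the level of Maven's transfer-progress logger, for example by appending the following JVM property to MAVEN_OPTS or the mvn command line; whether this PR uses exactly this mechanism is an assumption on my part:

```
-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
```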


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Travis Fails Because Logs Are Flooded.
> --
>
> Key: DRILL-6437
> URL: https://issues.apache.org/jira/browse/DRILL-6437
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.14.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Critical
>
> The Travis logs are flooded when downloading mysql.
> {code}
> Downloading from central: 
> http://repo.maven.apache.org/maven2/com/jcabi/mysql-dist/5.6.14/mysql-dist-5.6.14-linux-amd64.zip
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> Progress (1): 0.1/325 MB
> {code}
> And the Travis build fails with
> {code}
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> {code}
> This doesn't happen in the core apache travis builds because dependencies are 
> cached on Travis. However, when running a private Travis build that doesn't 
> have dependencies cached, we have to redownload mysql and we run into this 
> problem.
> Example Travis build with the issue: 
> https://travis-ci.org/ilooner/drill/builds/382364378



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6419) E2E Integration test for Lateral

2018-05-23 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6419:
-
Labels: ready-to-commit  (was: )

> E2E Integration test for Lateral
> ---
>
> Key: DRILL-6419
> URL: https://issues.apache.org/jira/browse/DRILL-6419
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-6419) E2E Integration test for Lateral

2018-05-23 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia resolved DRILL-6419.
--
Resolution: Fixed
  Reviewer: Parth Chandra

Merged with commit id: 17366d3f44451fdfb0b1b2f5f6e55026346aae9b

> E2E Integration test for Lateral
> ---
>
> Key: DRILL-6419
> URL: https://issues.apache.org/jira/browse/DRILL-6419
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-6440) Fix ignored unit tests in unnest

2018-05-23 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia resolved DRILL-6440.
--
   Resolution: Fixed
 Reviewer: Sorabh Hamirwasia
Fix Version/s: 1.14.0

Merged with commit id:  9b3be792678afeb5f0c87ba4c4a39afe97f20e98

> Fix ignored unit tests in unnest
> 
>
> Key: DRILL-6440
> URL: https://issues.apache.org/jira/browse/DRILL-6440
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6440) Fix ignored unit tests in unnest

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488033#comment-16488033
 ] 

ASF GitHub Bot commented on DRILL-6440:
---

sohami closed pull request #1283: DRILL-6440: Unnest unit tests and fixes for 
stats
URL: https://github.com/apache/drill/pull/1283
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
 
b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
index 57a0adeb7c..e985c4defe 100644
--- 
a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
+++ 
b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
@@ -207,6 +207,9 @@ public IterOutcome innerNext() {
   } finally {
 stats.stopSetup();
   }
+  // since we never called next on an upstream operator, incoming stats are
+  // not updated. update input stats explicitly.
+  stats.batchReceived(0, incoming.getRecordCount(), true);
   return IterOutcome.OK_NEW_SCHEMA;
 } else {
   assert state != BatchState.FIRST : "First batch should be OK_NEW_SCHEMA";
@@ -223,11 +226,13 @@ public IterOutcome innerNext() {
   context.getExecutorState().fail(ex);
   return IterOutcome.STOP;
 }
+stats.batchReceived(0, incoming.getRecordCount(), true);
 return OK_NEW_SCHEMA;
   }
   if (lateral.getRecordIndex() == 0) {
 unnest.resetGroupIndex();
   }
+  stats.batchReceived(0, incoming.getRecordCount(), false);
   return doWork();
 }
 
@@ -348,8 +353,7 @@ protected IterOutcome doWork() {
 recordCount = 0;
 final List<TransferPair> transfers = Lists.newArrayList();
 
-final FieldReference fieldReference =
-new FieldReference(popConfig.getColumn());
+final FieldReference fieldReference = new 
FieldReference(popConfig.getColumn());
 
 final TransferPair transferPair = 
getUnnestFieldTransferPair(fieldReference);
 
diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestCorrectness.java
 
b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestCorrectness.java
index 137966ba33..c04bff7753 100644
--- 
a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestCorrectness.java
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestCorrectness.java
@@ -615,7 +615,8 @@ public void testUnnestNonArrayColumn() {
*]
*  }
*
-   * @see TestResultSetLoaderMapArray TestResultSetLoaderMapArray for similar 
schema and data
+   * @see 
org.apache.drill.exec.physical.rowSet.impl.TestResultSetLoaderMapArray 
TestResultSetLoaderMapArray for
+   * similar schema and data
* @return TupleMetadata corresponding to the schema
*/
   private TupleMetadata getRepeatedMapSchema() {
diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestWithLateralCorrectness.java
 
b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestWithLateralCorrectness.java
index 9318c516b9..f281964fb4 100644
--- 
a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestWithLateralCorrectness.java
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/unnest/TestUnnestWithLateralCorrectness.java
@@ -48,7 +48,6 @@
 import org.apache.drill.test.rowSet.schema.SchemaBuilder;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -147,7 +146,6 @@ public void testUnnestVarWidthColumn() {
 
   }
 
-  @Ignore("RecordBatchSizer throws Exception in RecordBatchSizer.expandMap")
   @Test
   public void testUnnestMapColumn() {
 
@@ -297,15 +295,21 @@ public void testUnnestSchemaChange() {
 
   }
 
-  @Ignore ("Batch limits need to be sync'd with tthe record batch sizer. Fix 
once the calulations are stabilized")
   @Test
   public void testUnnestLimitBatchSize() {
 
-final int limitedOutputBatchSize = 1024;
-final int inputBatchSize = 1024+1;
-final int limitedOutputBatchSizeBytes = 1024*(4 + 4 + 4 * inputBatchSize); 
// num rows * (size of int + size of
-   
// int + size of int * num entries in
-   
// array)
+final int limitedOutputBatchSize = 127;
+final int inputBatchSize = limitedOutputBatchSize + 1;
+

[jira] [Commented] (DRILL-5270) Improve loading of profiles listing in the WebUI

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488008#comment-16488008
 ] 

ASF GitHub Bot commented on DRILL-5270:
---

kkhatua commented on issue #1250: DRILL-5270: Improve loading of profiles 
listing in the WebUI
URL: https://github.com/apache/drill/pull/1250#issuecomment-391489764
 
 
   @arina-ielchiieva I've made the following changes: 
   1. Refactored to introduce an Archiver
   2. Allowed the cache to apply only to the WebServer
   3. For non-WebServer requests, like SysTables, added support for recursive listing. 
This is because, while archiving speeds up performance for the WebServer, 
SysTables would still need access to archived profiles for analytics.
   4. Added tests for the ProfileSet cache
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve loading of profiles listing in the WebUI
> 
>
> Key: DRILL-5270
> URL: https://issues.apache.org/jira/browse/DRILL-5270
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.9.0
>Reporter: Kunal Khatua
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.14.0
>
>
> Currently, as the number of profiles increases, we reload the same list of 
> profiles from the FS.
> An ideal improvement would be to detect whether there are any new profiles and 
> only reload from disk then. Otherwise, a cached list is sufficient.
> For a directory of 280K profiles, the load time is close to 6 seconds on a 
> 32-core server. With caching, we can get it down to a few milliseconds.
> To invalidate the cache, we inspect the last-modified time of the directory to 
> confirm whether a reload is needed. 
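
A minimal sketch of the mtime-based invalidation described above, written against java.nio rather than Drill's actual file-system and profile classes; all names here are illustrative.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hedged sketch: reload the profile listing only when the directory's
// last-modified time changes; otherwise serve the cached list.
class CachedProfileListing {
  private final Path profileDir;
  private long cachedMtime = -1L;
  private List<String> cachedNames;

  CachedProfileListing(Path profileDir) {
    this.profileDir = profileDir;
  }

  synchronized List<String> list() throws IOException {
    long mtime = Files.getLastModifiedTime(profileDir).toMillis();
    if (cachedNames == null || mtime != cachedMtime) {      // empty or stale cache
      try (Stream<Path> paths = Files.list(profileDir)) {
        cachedNames = paths.map(p -> p.getFileName().toString())
            .sorted()
            .collect(Collectors.toList());
      }
      cachedMtime = mtime;
    }
    return cachedNames;                                      // cached list otherwise
  }
}
{code}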



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487826#comment-16487826
 ] 

ASF GitHub Bot commented on DRILL-6438:
---

ilooner opened a new pull request #1284: DRILL-6438: Remove excess logging from 
some tests.
URL: https://github.com/apache/drill/pull/1284
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   "delimiter" : "|"
> },
>

[jira] [Updated] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread Timothy Farkas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Farkas updated DRILL-6438:
--
Reviewer: Arina Ielchiieva

> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   "delimiter" : "|"
> },
> "csv" : {
>   "type" : "text",
>   "extensions" : [ "csv" ],
>   "delimiter" : ","
> },
> "tsv" : {
>   "type" : "text",
>   "extensions" : [ "tsv" ],
>   "delimiter" : "\t"
> },
> "httpd" : {
>   "type" : "httpd",
>   "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
> },
> "parquet" : {
>   "type" : "parquet"
> },
> "json" : {
>

[jira] [Commented] (DRILL-6438) Remove excess logging from tests

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487827#comment-16487827
 ] 

ASF GitHub Bot commented on DRILL-6438:
---

ilooner commented on issue #1284: DRILL-6438: Remove excess logging from some 
tests.
URL: https://github.com/apache/drill/pull/1284#issuecomment-391454738
 
 
   @arina-ielchiieva 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Remove excess logging from tests
> 
>
> Key: DRILL-6438
> URL: https://issues.apache.org/jira/browse/DRILL-6438
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
>
> TestLocalExchange and TestLoad have this issue.
> See example
> {code}
> Running 
> org.apache.drill.exec.physical.impl.TestLocalExchange#testGroupByMultiFields
> Plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_mux_exchange",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.enable_demux_exchange",
>   "bool_val" : false,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.slice_target",
>   "num_val" : 1,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "fs-scan",
> "@id" : 196611,
> "userName" : "travis",
> "files" : [ 
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/6.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/9.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/3.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/1.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/2.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/7.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/0.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/5.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/4.json",
>  
> "file:/home/travis/build/apache/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.TestLocalExchange/root/empTable/8.json"
>  ],
> "storage" : {
>   "type" : "file",
>   "enabled" : true,
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
> "root" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "tmp" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/dfsTestTmp/1527026062606-0",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> },
> "default" : {
>   "location" : 
> "/home/travis/build/apache/drill/exec/java-exec/./target/org.apache.drill.exec.physical.impl.TestLocalExchange/root",
>   "writable" : true,
>   "defaultInputFormat" : null,
>   "allowAccessOutsideWorkspace" : false
> }
>   },
>   "formats" : {
> "psv" : {
>   "type" : "text",
>   "extensions" : [ "tbl" ],
>   

[jira] [Commented] (DRILL-6436) Store context and name in AbstractStoragePlugin instead of replicating fields in each StoragePlugin

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487820#comment-16487820
 ] 

ASF GitHub Bot commented on DRILL-6436:
---

ilooner commented on issue #1282: DRILL-6436: Storage Plugin to have name and 
context moved to Abstract…
URL: https://github.com/apache/drill/pull/1282#issuecomment-391453354
 
 
   @vrozov @paul-rogers 
   
   Made context and name private final fields. Also removed redundant 
getContext() methods. And fixed javadoc errors in AbstractStoragePlugin.
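
As a rough sketch of the resulting shape (Drill's StoragePlugin and DrillbitContext types are assumed from the project; constructor and accessor details here are illustrative, not the exact committed code):

```java
// The abstract base class owns the shared state once, instead of every
// concrete plugin re-declaring its own name/context fields.
public abstract class AbstractStoragePlugin implements StoragePlugin {
  private final DrillbitContext context;   // private final, as noted above
  private final String name;

  protected AbstractStoragePlugin(DrillbitContext context, String name) {
    this.context = context;
    this.name = name;
  }

  public DrillbitContext getContext() {    // single accessor; subclasses drop theirs
    return context;
  }

  public String getName() {
    return name;
  }
}

// A concrete plugin no longer needs its own context/name fields or getContext();
// its plugin-specific methods (schema registration, config, etc.) are omitted here.
abstract class ExampleStoragePlugin extends AbstractStoragePlugin {
  ExampleStoragePlugin(DrillbitContext context, String name) {
    super(context, name);
  }
}
```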
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Store context and name in AbstractStoragePlugin instead of replicating fields 
> in each StoragePlugin
> ---
>
> Key: DRILL-6436
> URL: https://issues.apache.org/jira/browse/DRILL-6436
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>Priority: Major
> Fix For: 1.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487786#comment-16487786
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

dvjyothsna opened a new pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281
 
 
   @ilooner Please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)  Time 
> elapsed: 180.028 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487785#comment-16487785
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

dvjyothsna commented on issue #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#issuecomment-391446899
 
 
   Not able to reopen PR #1263, so reopening this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)  Time 
> elapsed: 180.028 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487781#comment-16487781
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

dvjyothsna commented on issue #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281#issuecomment-391446528
 
 
   Closing this and reopening PR #1263


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unit test TestGracefulShutdown.testRestApiShutdown times out
> 
>
> Key: DRILL-6415
> URL: https://issues.apache.org/jira/browse/DRILL-6415
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Reporter: Abhishek Girish
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.14.0
>
>
> {code}
> 16:03:40.415 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -18.3 KiB(72.9 KiB), h: -335.3 MiB(1.3 GiB), nh: 1.1 MiB(335.9 MiB)): 
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method) ~[na:1.8.0_161]
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> ~[na:1.8.0_161]
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)
>  ~[na:1.8.0_161]
>   at 
> org.apache.drill.exec.work.WorkManager.waitToExit(WorkManager.java:203) 
> ~[classes/:na]
>   at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:242) 
> ~[classes/:na]
>   at 
> org.apache.drill.test.ClusterFixture.safeClose(ClusterFixture.java:454) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at org.apache.drill.test.ClusterFixture.close(ClusterFixture.java:405) 
> ~[test-classes/:1.14.0-SNAPSHOT]
>   at 
> org.apache.drill.test.TestGracefulShutdown.testRestApiShutdown(TestGracefulShutdown.java:294)
>  ~[test-classes/:1.14.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_161]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_161]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_161]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  ~[junit-4.12.jar:4.12]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.executeTestMethod(JUnit4TestRunnerDecorator.java:154)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:70)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> mockit.integration.junit4.internal.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:34)
>  ~[jmockit-1.39.jar:1.39]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  ~[junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  ~[junit-4.12.jar:4.12]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_161]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
> {code}
> {code}
> testRestApiShutdown(org.apache.drill.test.TestGracefulShutdown)  Time 
> elapsed: 180.028 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 180000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> 

[jira] [Commented] (DRILL-6415) Unit test TestGracefulShutdown.testRestApiShutdown times out

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487782#comment-16487782
 ] 

ASF GitHub Bot commented on DRILL-6415:
---

dvjyothsna closed pull request #1281: DRILL-6415: Fixed 
TestGracefulShutdown.TestRestApi test from timing out
URL: https://github.com/apache/drill/pull/1281
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java 
b/exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
index ccd65e41fe..52f9292428 100644
--- 
a/exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
+++ 
b/exec/java-exec/src/test/java/org/apache/drill/test/TestGracefulShutdown.java
@@ -103,7 +103,7 @@ public void testOnlineEndPoints() throws  Exception {
 
   Drillbit drillbit = cluster.drillbit("db2");
   DrillbitEndpoint drillbitEndpoint =  
drillbit.getRegistrationHandle().getEndPoint();
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
 
   new Thread(new Runnable() {
 public void run() {
@@ -115,7 +115,7 @@ public void run() {
 }
   }).start();
 
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
   long currentTime = System.currentTimeMillis();
   long stopTime = currentTime + WAIT_TIMEOUT_MS;
@@ -151,7 +151,7 @@ public void testStateChange() throws  Exception {
 
 try (ClusterFixture cluster = builder.build()) {
   Drillbit drillbit = cluster.drillbit("db2");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   DrillbitEndpoint drillbitEndpoint =  
drillbit.getRegistrationHandle().getEndPoint();
   new Thread(new Runnable() {
 public void run() {
@@ -163,7 +163,7 @@ public void run() {
 }
   }).start();
 
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
 
   long currentTime = System.currentTimeMillis();
   long stopTime = currentTime + WAIT_TIMEOUT_MS;
@@ -201,24 +201,21 @@ public void testRestApi() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = cluster.clientFixture()) {
   Drillbit drillbit = cluster.drillbit("db1");
-  int port = 
drillbit.getContext().getConfig().getInt("drill.exec.http.port");
-  int grace_period = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
+  int port = drillbit.getWebServerPort();
+  int gracePeriod = 
drillbit.getContext().getConfig().getInt(ExecConstants.GRACE_PERIOD);
   listener =  client.queryBuilder().sql(sql).futureSummary();
   Thread.sleep(6);
-  while( port < 8049) {
-URL url = new URL("http://localhost:"+port+"/gracefulShutdown");
-HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-conn.setRequestMethod("POST");
-if (conn.getResponseCode() != 200) {
-  throw new RuntimeException("Failed : HTTP error code : "
-  + conn.getResponseCode());
-}
-port++;
+  URL url = new URL("http://localhost:" + port + "/gracefulShutdown");
+  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+  conn.setRequestMethod("POST");
+  if (conn.getResponseCode() != 200) {
+throw new RuntimeException("Failed : HTTP error code : "
++ conn.getResponseCode());
   }
-  Thread.sleep(grace_period);
+  Thread.sleep(gracePeriod);
   Collection drillbitEndpoints = cluster.drillbit()
   .getContext()
   .getClusterCoordinator()
@@ -237,7 +234,7 @@ public void testRestApi() throws Exception {
   }
 
   Assert.assertTrue(listener.isDone());
-  Assert.assertEquals(1,drillbitEndpoints.size());
+  Assert.assertEquals(2, drillbitEndpoints.size());
 }
   }
 
@@ -252,44 +249,40 @@ public void testRestApiShutdown() throws Exception {
 builder = enableWebServer(builder);
 QueryBuilder.QuerySummaryFuture listener;
 final String sql = "select * from dfs.root.`.`";
-try ( ClusterFixture cluster = builder.build();
+try (ClusterFixture cluster = builder.build();
   final ClientFixture client = 

[jira] [Updated] (DRILL-6440) Fix ignored unit tests in unnest

2018-05-23 Thread Sorabh Hamirwasia (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6440:
-
Labels: ready-to-commit  (was: )

> Fix ignored unit tests in unnest
> 
>
> Key: DRILL-6440
> URL: https://issues.apache.org/jira/browse/DRILL-6440
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>Priority: Major
>  Labels: ready-to-commit
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6440) Fix ignored unit tests in unnest

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487774#comment-16487774
 ] 

ASF GitHub Bot commented on DRILL-6440:
---

sohami commented on issue #1283: DRILL-6440: Unnest unit tests and fixes for 
stats
URL: https://github.com/apache/drill/pull/1283#issuecomment-391444730
 
 
   +1 LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix ignored unit tests in unnest
> 
>
> Key: DRILL-6440
> URL: https://issues.apache.org/jira/browse/DRILL-6440
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>Priority: Major
>  Labels: ready-to-commit
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487680#comment-16487680
 ] 

ASF GitHub Bot commented on DRILL-5977:
---

akumarb2010 commented on issue #1272: DRILL-5977: Filter Pushdown in 
Drill-Kafka plugin
URL: https://github.com/apache/drill/pull/1272#issuecomment-391427723
 
 
   @aravi5  Thanks for providing all the details. Please find my comments as 
below.
   
   >> If the predicates are only on kafkaMsgOffset (for example SELECT * FROM 
kafka.LogEventStream WHERE kafkaMsgOffset >= 1000 AND kafkaMsgOffset < 2000), 
this will apply the pushdown to ALL partitions within a topic. If there is a 
partition where such offsets do not exist (either the offsets have expired or 
messages for those offsets are yet to be produced), then such partition will 
not be scanned.
   
   In Kafka, `offset` scope itself is per partition. I am unable to find any 
use case where we can take a range of offsets and apply it to all partitions; 
in most scenarios these would not be valid offsets.
   
   IMHO, we should only apply predicate pushdown where we have exact scan 
specs.
   
   For example, in the scenario below, we can apply the predicates without any 
issues.
   
   ```
   SELECT * FROM kafka.LogEventStream WHERE (kafkaPartitionId = 1 AND 
kafkaMsgOffset > 1000 AND kafkaMsgOffset < 2000) OR (kafkaPartitionId = 2 AND 
kafkaMsgOffset > 4000)
   
   ```
   
   And this way, we can use this predicate pushdown feature as an external 
checkpointing mechanism.
   
   And coming to timestamps, my point is that in case of an invalid *partitionId*, 
the query might block indefinitely with this feature, whereas without this 
feature we would return empty results.
   
   It would be great if you could add a few test cases with invalid partitions, 
like the one below (assuming partition 100 doesn't exist):
   
   ```
   SELECT * FROM kafka.LogEventStream WHERE (kafkaPartitionId = 100 AND 
kafkaMsgTimestamp > 1527092007199)
   
   ```
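   To make the concern concrete, here is a minimal, hypothetical sketch (plain 
kafka-clients consumer API, not the plugin's actual code; all class and variable 
names are illustrative) of a guard that only queries partitions the broker 
actually reports, so a request for a non-existing partition such as 100 is 
skipped instead of being passed to `offsetsForTimes`:
   
   ```
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Set;
   import java.util.stream.Collectors;
   
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.common.TopicPartition;
   
   public class OffsetsForTimesGuard {
   
     // Resolves a timestamp to per-partition start offsets, but only for
     // partitions that actually exist for the topic.
     static Map<TopicPartition, Long> resolveStartOffsets(KafkaConsumer<?, ?> consumer,
                                                          String topic,
                                                          Set<Integer> requestedPartitions,
                                                          long timestampMs) {
       // Partitions the broker reports for this topic.
       Set<Integer> knownPartitions = consumer.partitionsFor(topic).stream()
           .map(p -> p.partition())
           .collect(Collectors.toSet());
   
       Map<TopicPartition, Long> query = new HashMap<>();
       for (int partitionId : requestedPartitions) {
         if (knownPartitions.contains(partitionId)) {   // skip e.g. partition 100 if it does not exist
           query.put(new TopicPartition(topic, partitionId), timestampMs);
         }
       }
   
       Map<TopicPartition, Long> startOffsets = new HashMap<>();
       consumer.offsetsForTimes(query).forEach((tp, offsetAndTimestamp) -> {
         if (offsetAndTimestamp != null) {              // null when no message at/after the timestamp
           startOffsets.put(tp, offsetAndTimestamp.offset());
         }
       });
       return startOffsets;
     }
   }
   ```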
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.14.0
>
>
> As part of Kafka storage plugin review, below is the suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487592#comment-16487592
 ] 

ASF GitHub Bot commented on DRILL-5977:
---

aravi5 commented on issue #1272: DRILL-5977: Filter Pushdown in Drill-Kafka 
plugin
URL: https://github.com/apache/drill/pull/1272#issuecomment-391413129
 
 
   @akumarb2010 - the user has the flexibility to provide conditions on any of 
the metadata fields (`kafkaMsgOffset`, `kafkaMsgTimestamp`, 
`kafkaPartitionId`). The conditions on these metadata fields are pushed down and 
translated to a **Scan Spec**, which is a collection of partition-specific scan 
specs.
   
   Let us consider the following Scan spec before pushdown.
   
   ```
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=0, startOffset=0, 
endOffset=1500)
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=1, 
startOffset=2000, endOffset=3000)
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=2, startOffset=25, 
endOffset=3500)
   ```
   
   If the predicates are only on `kafkaMsgOffset` (for example `SELECT * FROM 
kafka.LogEventStream WHERE kafkaMsgOffset >= 1000 AND kafkaMsgOffset < 2000)`, 
this will apply the pushdown to ALL partitions within a topic. If there is a 
partition where such offsets do not exist (either the offsets have expired or 
messages for those offsets are yet to be produced), then such partition will 
not be scanned.
   
   The scan spec in this case would be
   ```
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=0, 
startOffset=1000, endOffset=1500)
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=1, 
startOffset=1200, endOffset=2000)
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=2, 
startOffset=1000, endOffset=2000)
   ```
   
   I am not sure if we should introduce per-partition semantics in the query 
and would prefer keeping it generic. The *scenario you mentioned in (1)* can 
still be addressed with the following predicates:
   
   ```
   SELECT * FROM kafka.LogEventStream WHERE (kafkaPartitionId = 1 AND 
kafkaMsgOffset >= 1000 AND kafkaMsgOffset < 2000) OR (kafkaPartitionId = 2 AND 
kafkaMsgOffset >= 1500 AND kafkaMsgOffset < 5000)
   ```
   
   The scan spec in this case would be as follows (notice that there is no 
`partitionId 0`).
   ```
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=1, 
startOffset=1200, endOffset=2000)
   KafkaPartitionScanSpec(topicName=TopicTable, partitionId=2, 
startOffset=1000, endOffset=2000)
   ```
   
   This applies to conditions on `kafkaMsgTimestamp` as well. The most common use 
case is that the user wants to view messages that belong to a specific time window 
(without being specific about the partition). This can be done by having predicates 
on `kafkaMsgTimestamp` alone, without having to specify `kafkaPartitionId`.
   ```
   SELECT * FROM kafka.LogEventStream WHERE kafkaMsgTimestamp > 1527092007199 
AND kafkaMsgTimestamp < 1527092031717
   ```
   
   However, the user can also specify conditions specific to a partition - the 
*scenario mentioned in (2)*:
   
   ```
   SELECT * FROM kafka.LogEventStream WHERE (kafkaPartitionId = 1 AND 
kafkaMsgTimestamp > 1527092007199 AND kafkaMsgTimestamp < 1527092031717) OR 
(kafkaPartitionId = 2 AND kafkaMsgTimestamp > 1527092133501)
   ```
   
   This would also avoid the problem of calling `offsetsForTimes` on 
non-existing partitions, since such partitions are filtered out of the scan 
spec.
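   Conceptually, the pushdown described above intersects each partition's 
[startOffset, endOffset) range with the range implied by the predicates and drops 
partitions whose range becomes empty. A simplified sketch of that idea (the classes 
below are illustrative stand-ins modeled on `KafkaPartitionScanSpec`, not the 
actual Drill implementation):
   
   ```
   import java.util.ArrayList;
   import java.util.List;
   
   // Illustrative stand-in for the per-partition scan spec discussed above.
   class PartitionScanSpec {
     final String topicName;
     final int partitionId;
     final long startOffset;   // inclusive
     final long endOffset;     // exclusive
   
     PartitionScanSpec(String topicName, int partitionId, long startOffset, long endOffset) {
       this.topicName = topicName;
       this.partitionId = partitionId;
       this.startOffset = startOffset;
       this.endOffset = endOffset;
     }
   }
   
   class OffsetPushdown {
     // Intersects each partition's offset range with the predicate range
     // [predicateStart, predicateEnd); partitions with an empty intersection
     // are dropped from the scan entirely.
     static List<PartitionScanSpec> push(List<PartitionScanSpec> specs,
                                         long predicateStart, long predicateEnd) {
       List<PartitionScanSpec> pruned = new ArrayList<>();
       for (PartitionScanSpec s : specs) {
         long newStart = Math.max(s.startOffset, predicateStart);
         long newEnd = Math.min(s.endOffset, predicateEnd);
         if (newStart < newEnd) {     // non-empty range: keep a narrowed spec
           pruned.add(new PartitionScanSpec(s.topicName, s.partitionId, newStart, newEnd));
         }
         // else: the requested offsets do not exist in this partition, so it is not scanned
       }
       return pruned;
     }
   }
   ```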


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.14.0
>
>
> As part of Kafka storage plugin review, below is the suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6442) Adjust Hbase disk cost & row count estimation when filter push down is applied

2018-05-23 Thread Arina Ielchiieva (JIRA)
Arina Ielchiieva created DRILL-6442:
---

 Summary: Adjust Hbase disk cost & row count estimation when filter 
push down is applied
 Key: DRILL-6442
 URL: https://issues.apache.org/jira/browse/DRILL-6442
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Arina Ielchiieva
Assignee: Arina Ielchiieva
 Fix For: 1.14.0


Disk cost for an HBase scan is calculated based on the scan size in bytes:

{noformat}
float diskCost = scanSizeInBytes * ((columns == null || columns.isEmpty()) ? 1 
: columns.size() / statsCalculator.getColsPerRow());
{noformat}

The scan size in bytes is estimated using {{TableStatsCalculator}} with the help of 
sampling.
When we estimate the size for the first time (before applying filter push down), 
we sample random rows. When estimating rows after filter push down, 
we sample only rows that qualify the filter condition. As a result, the 
average row size can be higher after filter push down 
than before. Unfortunately, since the disk cost depends on these estimates, the plan 
with filter push down can end up with a higher cost than the plan without it.
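
A hypothetical worked example (all numbers invented) of how the sampling difference can 
invert the comparison, using the simplified case where {{columns}} is empty so the 
column factor is 1:

{code}
public class HbaseCostExample {
  public static void main(String[] args) {
    long rowsBefore = 1_000_000L;   // estimated row count without filter push down
    long avgRowSizeBefore = 100L;   // bytes, sampled from random rows

    long rowsAfter = 400_000L;      // fewer rows once the filter is pushed down
    long avgRowSizeAfter = 300L;    // bytes, sampled only from rows matching the filter

    float diskCostBefore = rowsBefore * avgRowSizeBefore;  // 100,000,000
    float diskCostAfter = rowsAfter * avgRowSizeAfter;     // 120,000,000

    // The pushed-down plan scans fewer rows but is costed higher,
    // so the planner may pick the plan without push down.
    System.out.printf("before=%.0f after=%.0f%n", diskCostBefore, diskCostAfter);
  }
}
{code}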





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-6435) MappingSet is stateful, so it can't be shared between threads

2018-05-23 Thread Volodymyr Vysotskyi (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487457#comment-16487457
 ] 

Volodymyr Vysotskyi commented on DRILL-6435:


[~vrozov], thanks for catching this issue. Agreed that it should be non-static, 
as is done in most of the record batch classes.
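
A minimal sketch of the general pattern of the fix (illustrative classes only, not 
the actual Drill code): replace the shared static instance with a per-operator 
instance field, so each fragment thread mutates only its own copy.

{code}
// Stand-in for a stateful helper such as MappingSet.
class StatefulMapping {
  private int index;                // mutable visitor state
  void advance() { index++; }
}

class SomeRecordBatch {
  // Problematic: a single instance shared (and mutated) by all concurrent queries.
  // private static final StatefulMapping MAPPING = new StatefulMapping();

  // Non-static field: each operator instance gets its own copy.
  private final StatefulMapping mapping = new StatefulMapping();

  void doCodeGen() {
    mapping.advance();              // state changes stay local to this operator
  }
}
{code}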

> MappingSet is stateful, so it can't be shared between threads
> -
>
> Key: DRILL-6435
> URL: https://issues.apache.org/jira/browse/DRILL-6435
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
>
> There are several instances where static {{MappingSet}} instances are used 
> (for example {{NestedLoopJoinBatch}} and {{BaseSortWrapper}}). This causes 
> instance reuse across threads when queries are executed concurrently. As 
> {{MappingSet}} is a stateful class with visitor design pattern, such reuse 
> causes invalid state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-5977) predicate pushdown support kafkaMsgOffset

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487375#comment-16487375
 ] 

ASF GitHub Bot commented on DRILL-5977:
---

akumarb2010 commented on issue #1272: DRILL-5977: Filter Pushdown in 
Drill-Kafka plugin
URL: https://github.com/apache/drill/pull/1272#issuecomment-391372423
 
 
   @aravi5  Sorry for the delay in review, and thanks for implementing this nice 
feature.
   
   Before starting the code review, I have a few comments on the push down design.
   
   1. *KafkaMsgOffset* predicates have to be partition specific, right? We should 
not be applying these predicates globally across partitions. For example, 
p1[startOffset=1000, endOffset=2000], p2[1500,5000]; in this case, it is always 
better to consider the offsets per partition. But in the test cases, I do not see 
partition-specific predicates.
   
   2. Good to see that you have considered multiple scenarios for 
*kafkaMsgTimestamp* predicates; the above point is also applicable to 
*kafkaMsgTimestamp*. We might need to consider partition-specific 
*kafkaMsgTimestamp* predicates. But this might cause issues, as the 
*offsetsForTimes* method is blocking and can block indefinitely if the user 
provides a wrong partition.
   
   Can you please clarify the above two comments? 
  
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> predicate pushdown support kafkaMsgOffset
> -
>
> Key: DRILL-5977
> URL: https://issues.apache.org/jira/browse/DRILL-5977
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: B Anil Kumar
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.14.0
>
>
> As part of Kafka storage plugin review, below is the suggestion from Paul.
> {noformat}
> Does it make sense to provide a way to select a range of messages: a starting 
> point or a count? Perhaps I want to run my query every five minutes, scanning 
> only those messages since the previous scan. Or, I want to limit my take to, 
> say, the next 1000 messages. Could we use a pseudo-column such as 
> "kafkaMsgOffset" for that purpose? Maybe
> SELECT * FROM  WHERE kafkaMsgOffset > 12345
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6425) Upgrade mapr release version

2018-05-23 Thread Vlad Rozov (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vlad Rozov updated DRILL-6425:
--
Summary: Upgrade mapr release version  (was: Upgrade org.ojai:ojai version)

> Upgrade mapr release version
> 
>
> Key: DRILL-6425
> URL: https://issues.apache.org/jira/browse/DRILL-6425
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
> Fix For: 1.14.0
>
>
> Upgrade from {{1.1}} to {{2.0.1-mapr-1804}} or the most recent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6425) Upgrade mapr release version

2018-05-23 Thread Vlad Rozov (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vlad Rozov updated DRILL-6425:
--
Description: Upgrade MapR dependencies from {{5.2.1}} to {{6.0.1}}.  
(was: Upgrade from {{1.1}} to {{2.0.1-mapr-1804}} or the most recent.)

> Upgrade mapr release version
> 
>
> Key: DRILL-6425
> URL: https://issues.apache.org/jira/browse/DRILL-6425
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Major
> Fix For: 1.14.0
>
>
> Upgrade MapR dependencies from {{5.2.1}} to {{6.0.1}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)