[jira] [Commented] (DRILL-5902) Regression: Queries encounter random failure due to RPC connection timed out

2017-10-23 Thread Robert Hou (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216354#comment-16216354
 ] 

Robert Hou commented on DRILL-5902:
---

The previous run using commit:
{noformat}
1.12.0-SNAPSHOT   f1d1945b3772bb782039fd6811e34a7de66441c8   DRILL-5582: C++ 
Client: [Threat Modeling] Drillbit may be spoofed by an attacker and this may 
lead to data being written to the attacker's target instead of Drillbit   
19.10.2017 @ 17:13:05 PDT   Unknown   19.10.2017 @ 18:37:19 PDT
{noformat}
was clean.  Because these are random failures, this does not mean that one of 
the later commits caused the problem.

> Regression: Queries encounter random failure due to RPC connection timed out
> 
>
> Key: DRILL-5902
> URL: https://issues.apache.org/jira/browse/DRILL-5902
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - RPC
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Priority: Critical
> Attachments: 261230f7-e3b9-0cee-22d8-921cb56e3e12.sys.drill, 
> node196.drillbit.log
>
>
> Multiple random failures (25) occurred with the latest 
> Functional-Baseline-88.193 run.  Here is a sample query:
> {noformat}
> -- Kitchen sink
> -- Use all supported functions
> select
>     rank()          over W,
>     dense_rank()    over W,
>     percent_rank()  over W,
>     cume_dist()     over W,
>     avg(c_integer + c_integer)  over W,
>     sum(c_integer/100)          over W,
>     count(*)        over W,
>     min(c_integer)  over W,
>     max(c_integer)  over W,
>     row_number()    over W
> from
> j7
> where
> c_boolean is not null
> window  W as (partition by c_bigint, c_date, c_time, c_boolean order by 
> c_integer)
> {noformat}
> From the logs:
> {noformat}
> 2017-10-23 04:14:36,536 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:1 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:5 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:9 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:13 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:17 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,538 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:21 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> 2017-10-23 04:14:36,538 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler 
> - Dropping request for early fragment termination for path 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:1:25 -> 
> 261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
> {noformat}
> {noformat}
> 2017-10-23 04:14:53,941 [UserServer-1] INFO  
> o.a.drill.exec.rpc.user.UserServer - RPC connection /10.10.88.196:31010 <--> 
> /10.10.88.193:38281 (user server) timed out.  Timeout was set to 30 seconds. 
> Closing connection.
> 2017-10-23 04:14:53,952 [UserServer-1] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 261230f8-2698-15b2-952f-d4ade8d6b180:0:0: State change requested RUNNING --> 
> FAILED
> 2017-10-23 04:14:53,952 [261230f8-2698-15b2-952f-d4ade8d6b180:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 261230f8-2698-15b2-952f-d4ade8d6b180:0:0: State change requested FAILED --> 
> FINISHED
> 2017-10-23 04:14:53,956 [UserServer-1] WARN  
> o.apache.drill.exec.rpc.RequestIdMap - Failure while attempting to fail rpc 
> response.
> java.lang.IllegalArgumentException: Self-suppression not permitted
> at java.lang.Throwable.addSuppressed(Throwable.java:1043) 
> ~[na:1.7.0_45]
> at 
> 

[jira] [Updated] (DRILL-5902) Regression: Queries encounter random failure due to RPC connection timed out

2017-10-23 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou updated DRILL-5902:
--
Attachment: 261230f7-e3b9-0cee-22d8-921cb56e3e12.sys.drill
node196.drillbit.log


[jira] [Created] (DRILL-5902) Regression: Queries encounter random failure due to RPC connection timed out

2017-10-23 Thread Robert Hou (JIRA)
Robert Hou created DRILL-5902:
-

 Summary: Regression: Queries encounter random failure due to RPC 
connection timed out
 Key: DRILL-5902
 URL: https://issues.apache.org/jira/browse/DRILL-5902
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - RPC
Affects Versions: 1.11.0
Reporter: Robert Hou
Priority: Critical


Multiple random failures (25) occurred with the latest 
Functional-Baseline-88.193 run.  Here is a sample query:

{noformat}
-- Kitchen sink
-- Use all supported functions
select
    rank()          over W,
    dense_rank()    over W,
    percent_rank()  over W,
    cume_dist()     over W,
    avg(c_integer + c_integer)  over W,
    sum(c_integer/100)          over W,
    count(*)        over W,
    min(c_integer)  over W,
    max(c_integer)  over W,
    row_number()    over W
from
j7
where
c_boolean is not null
window  W as (partition by c_bigint, c_date, c_time, c_boolean order by 
c_integer)
{noformat}

From the logs:
{noformat}
2017-10-23 04:14:36,536 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:1 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:5 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:9 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:13 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,537 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:17 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,538 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:21 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
2017-10-23 04:14:36,538 [BitServer-7] WARN  o.a.d.e.w.b.ControlMessageHandler - 
Dropping request for early fragment termination for path 
261230e8-d03e-9ca9-91bf-c1039deecde2:1:25 -> 
261230e8-d03e-9ca9-91bf-c1039deecde2:0:0 as path to executor unavailable.
{noformat}

{noformat}
2017-10-23 04:14:53,941 [UserServer-1] INFO  o.a.drill.exec.rpc.user.UserServer 
- RPC connection /10.10.88.196:31010 <--> /10.10.88.193:38281 (user server) 
timed out.  Timeout was set to 30 seconds. Closing connection.
2017-10-23 04:14:53,952 [UserServer-1] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 261230f8-2698-15b2-952f-d4ade8d6b180:0:0: 
State change requested RUNNING --> FAILED
2017-10-23 04:14:53,952 [261230f8-2698-15b2-952f-d4ade8d6b180:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 261230f8-2698-15b2-952f-d4ade8d6b180:0:0: 
State change requested FAILED --> FINISHED
2017-10-23 04:14:53,956 [UserServer-1] WARN  
o.apache.drill.exec.rpc.RequestIdMap - Failure while attempting to fail rpc 
response.
java.lang.IllegalArgumentException: Self-suppression not permitted
at java.lang.Throwable.addSuppressed(Throwable.java:1043) ~[na:1.7.0_45]
at 
org.apache.drill.common.DeferredException.addException(DeferredException.java:88)
 ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.common.DeferredException.addThrowable(DeferredException.java:97)
 ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.fail(FragmentExecutor.java:413)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.access$700(FragmentExecutor.java:55)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$ExecutorStateImpl.fail(FragmentExecutor.java:427)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.ops.FragmentContext.fail(FragmentContext.java:213) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 

[jira] [Commented] (DRILL-5901) Drill test framework can have successful run even if a random failure occurs

2017-10-23 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216318#comment-16216318
 ] 

Abhishek Girish commented on DRILL-5901:


This is not an Apache Drill issue, so I don't think we need a JIRA for this. We 
just need to update the return code logic, which determines the regression run 
status. 
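
The suggested fix can be sketched as follows (hypothetical class and method 
names, not the test framework's actual code): the run's exit code should 
reflect random failures as well as ordinary ones, so CI marks the run failed.

```java
// Hypothetical sketch of the suggested return-code fix: count random
// failures toward the overall run status instead of ignoring them.
public class RunStatus {

    // Exit code 0 only when no failures of any kind occurred.
    public static int exitCode(int ordinaryFailures, int randomFailures) {
        return (ordinaryFailures > 0 || randomFailures > 0) ? 1 : 0;
    }

    public static void main(String[] args) {
        // A run with 25 random failures is not a pass.
        System.out.println(exitCode(0, 25)); // prints 1
    }
}
```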

> Drill test framework can have successful run even if a random failure occurs
> 
>
> Key: DRILL-5901
> URL: https://issues.apache.org/jira/browse/DRILL-5901
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>
> From Jenkins:
> http://10.10.104.91:8080/view/Nightly/job/TPCH-SF100-baseline/574/console
> Random Failures:
> /root/drillAutomation/framework-master/framework/resources/Advanced/tpch/tpch_sf1/original/parquet/query17.sql
> Query: 
> SELECT
>   SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
> FROM
>   lineitem L,
>   part P
> WHERE
>   P.P_PARTKEY = L.L_PARTKEY
>   AND P.P_BRAND = 'BRAND#13'
>   AND P.P_CONTAINER = 'JUMBO CAN'
>   AND L.L_QUANTITY < (
> SELECT
>   0.2 * AVG(L2.L_QUANTITY)
> FROM
>   lineitem L2
> WHERE
>   L2.L_PARTKEY = P.P_PARTKEY
>   )
> Failed with exception
> java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Memory was leaked 
> by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
>   (java.lang.IllegalStateException) Memory was leaked by query. Memory 
> leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> org.apache.drill.exec.memory.BaseAllocator.close():519
> org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
> org.apache.drill.exec.ops.OperatorContextImpl.close():108
> org.apache.drill.exec.ops.FragmentContext.suppressingClose():435
> org.apache.drill.exec.ops.FragmentContext.close():424
> 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():324
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():155
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():267
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():744
>   at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
>   at 
> org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
>   at 
> oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
>   at 
> org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
>   at 
> oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
>   at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
>   at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:110)
>   at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
>   at 
> org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:206)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:115)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: 
> SYSTEM ERROR: IllegalStateException: Memory was leaked by query. Memory 
> leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
>   (java.lang.IllegalStateException) Memory was leaked by query. Memory 
> leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> 

[jira] [Commented] (DRILL-5900) Regression: TPCH query encounters random IllegalStateException: Memory was leaked by query

2017-10-23 Thread Paul Rogers (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216181#comment-16216181
 ] 

Paul Rogers commented on DRILL-5900:


The key line is this one:

{noformat}
Memory leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)
{noformat}

This is a single buffer of size 0x200000 (2,097,152 bytes). (Or, two buffers half that size, etc.)

Does Parquet allocate a working buffer of this size that it fails to release 
under some situations?
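
For context, the message quoted above comes from a close-time check: the 
allocator verifies at close() that all memory charged to it has been released. 
A minimal sketch of that pattern (illustrative only, not Drill's actual 
BaseAllocator):

```java
// Illustrative close-time leak detection, modeled on the error above
// (not Drill's actual BaseAllocator).
public class LeakCheckingAllocator implements AutoCloseable {
    private long allocated;

    public void allocate(long bytes) { allocated += bytes; }

    public void release(long bytes) { allocated -= bytes; }

    public long allocatedBytes() { return allocated; }

    // Throws if any memory charged to this allocator was never released.
    @Override
    public void close() {
        if (allocated != 0) {
            throw new IllegalStateException(
                "Memory was leaked by query. Memory leaked: (" + allocated + ")");
        }
    }
}
```

An unreleased 2 MB (2,097,152-byte) buffer left behind at close() produces 
exactly this kind of failure.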



> Regression: TPCH query encounters random IllegalStateException: Memory was 
> leaked by query
> --
>
> Key: DRILL-5900
> URL: https://issues.apache.org/jira/browse/DRILL-5900
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Attachments: 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f.sys.drill, 
> drillbit.log.node81, drillbit.log.node88
>
>
> This is a random failure.  This test has passed before.
> TPCH query 6:
> {noformat}
> SELECT
>   SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
> FROM
>   lineitem L,
>   part P
> WHERE
>   P.P_PARTKEY = L.L_PARTKEY
>   AND P.P_BRAND = 'BRAND#13'
>   AND P.P_CONTAINER = 'JUMBO CAN'
>   AND L.L_QUANTITY < (
> SELECT
>   0.2 * AVG(L2.L_QUANTITY)
> FROM
>   lineitem L2
> WHERE
>   L2.L_PARTKEY = P.P_PARTKEY
>   )
> {noformat}
> Error is:
> {noformat}
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:8:2] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IllegalStateException: 
> Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> IllegalStateException: Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
>  ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:298)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: java.lang.IllegalStateException: Memory was leaked by query. 
> Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:519) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.AbstractOperatorExecContext.close(AbstractOperatorExecContext.java:86)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.OperatorContextImpl.close(OperatorContextImpl.java:108)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.suppressingClose(FragmentContext.java:435)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.close(FragmentContext.java:424) 
> ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:324)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:155)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> ... 5 common frames omitted
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:6:0] INFO  
> 

[jira] [Commented] (DRILL-5795) Filter pushdown for parquet handles multi rowgroup file

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216117#comment-16216117
 ] 

ASF GitHub Bot commented on DRILL-5795:
---

Github user cchang738 commented on the issue:

https://github.com/apache/drill/pull/949
  
There is a plan verification failure due to a plan change. The plan baseline 
needs to be updated after this PR is merged.

Plan Verification Failures:
/root/drillAutomation/mapr/framework/resources/Functional/int96/q28.q
Query: 
explain plan for select voter_id, name from `hive1_parquet_part` where 
date_part('year', create_timestamp1)=2018

Expected and actual text plans are different.
Expected:
.*numFiles=2, usedMetadataFile=true.*

Actual:
00-00    Screen
00-01      Project(voter_id=[$0], name=[$1])
00-02        Project(voter_id=[$1], name=[$2])
00-03  Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
[path=/drill/testdata/subqueries/hive1_parquet_part/0_0_10.parquet], 
ReadEntryWithPath 
[path=/drill/testdata/subqueries/hive1_parquet_part/0_0_9.parquet]], 
selectionRoot=/drill/testdata/subqueries/hive1_parquet_part, numFiles=2, 
numRowGroups=2, usedMetadataFile=true, 
cacheFileRoot=/drill/testdata/subqueries/hive1_parquet_part, 
columns=[`create_timestamp1`, `voter_id`, `name`]]])


> Filter pushdown for parquet handles multi rowgroup file
> ---
>
> Key: DRILL-5795
> URL: https://issues.apache.org/jira/browse/DRILL-5795
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Affects Versions: 1.11.0
>Reporter: Damien Profeta
>Assignee: Damien Profeta
>  Labels: doc-impacting
> Fix For: 1.12.0
>
> Attachments: multirowgroup_overlap.parquet
>
>
> DRILL-1950 implemented filter pushdown for Parquet files, but only for the 
> case of one row group per file. With multiple row groups per file, it detects 
> that a row group can be pruned but then tells the drillbit to read the whole 
> file, which leads to a performance issue.
> Having multiple row groups per file helps handle partitioned datasets while 
> still reading only the relevant subset of the data, without ending up with 
> more files than really needed.
> Let's say, for instance, you have a Parquet file composed of RG1 and RG2 with 
> a single column a. Min/max in RG1 are 1-2 and min/max in RG2 are 2-3.
> If I run "select a from file where a=3", today it will read the whole file; 
> with the patch it will read only RG2.
> *For documentation*
> The Support / Other section in 
> https://drill.apache.org/docs/parquet-filter-pushdown/ should be updated: 
> after the fix, files with multiple row groups are supported.
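
The pruning rule described in the issue can be sketched as follows 
(illustrative names, not Drill's actual planner code): a row group is kept 
only if the predicate value falls inside its min/max statistics.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative min/max row-group pruning for an equality predicate
// (hypothetical classes; not Drill's actual planner code).
public class RowGroupPruning {

    static class RowGroup {
        final String name;
        final long min, max;
        RowGroup(String name, long min, long max) {
            this.name = name; this.min = min; this.max = max;
        }
    }

    // Keep only row groups whose min/max range can contain the value.
    static List<RowGroup> prune(List<RowGroup> groups, long value) {
        List<RowGroup> kept = new ArrayList<>();
        for (RowGroup rg : groups) {
            if (value >= rg.min && value <= rg.max) {
                kept.add(rg);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<RowGroup> groups = List.of(
            new RowGroup("RG1", 1, 2),   // min/max 1-2
            new RowGroup("RG2", 2, 3));  // min/max 2-3
        // select a from file where a = 3  ->  only RG2 survives pruning
        System.out.println(prune(groups, 3).get(0).name); // prints RG2
    }
}
```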



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5901) Drill test framework can have successful run even if a random failure occurs

2017-10-23 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou updated DRILL-5901:
--
Description: 
From Jenkins:
http://10.10.104.91:8080/view/Nightly/job/TPCH-SF100-baseline/574/console


Random Failures:
/root/drillAutomation/framework-master/framework/resources/Advanced/tpch/tpch_sf1/original/parquet/query17.sql
Query: 
SELECT
  SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
FROM
  lineitem L,
  part P
WHERE
  P.P_PARTKEY = L.L_PARTKEY
  AND P.P_BRAND = 'BRAND#13'
  AND P.P_CONTAINER = 'JUMBO CAN'
  AND L.L_QUANTITY < (
SELECT
  0.2 * AVG(L2.L_QUANTITY)
FROM
  lineitem L2
WHERE
  L2.L_PARTKEY = P.P_PARTKEY
  )
Failed with exception
java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Memory was leaked 
by query. Memory leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]

  (java.lang.IllegalStateException) Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)

org.apache.drill.exec.memory.BaseAllocator.close():519
org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
org.apache.drill.exec.ops.OperatorContextImpl.close():108
org.apache.drill.exec.ops.FragmentContext.suppressingClose():435
org.apache.drill.exec.ops.FragmentContext.close():424
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():324
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():155
org.apache.drill.exec.work.fragment.FragmentExecutor.run():267
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():744

at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:110)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
at 
org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:206)
at 
org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:115)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
ERROR: IllegalStateException: Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]

  (java.lang.IllegalStateException) Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)

org.apache.drill.exec.memory.BaseAllocator.close():519
org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
org.apache.drill.exec.ops.OperatorContextImpl.close():108
org.apache.drill.exec.ops.FragmentContext.suppressingClose():435
org.apache.drill.exec.ops.FragmentContext.close():424
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():324
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():155
org.apache.drill.exec.work.fragment.FragmentExecutor.run():267
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():744

at 

[jira] [Commented] (DRILL-5901) Drill test framework can have successful run even if a random failure occurs

2017-10-23 Thread Abhishek Girish (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216048#comment-16216048
 ] 

Abhishek Girish commented on DRILL-5901:


What is the expectation here? It's reported as a random failure, which is as 
per design. 

> Drill test framework can have successful run even if a random failure occurs
> 
>
> Key: DRILL-5901
> URL: https://issues.apache.org/jira/browse/DRILL-5901
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>
>   at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:110)
>   at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
>   at 
> org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:206)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:115)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: 
> SYSTEM ERROR: IllegalStateException: Memory was leaked by query. Memory 
> leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
>   (java.lang.IllegalStateException) Memory was leaked by query. Memory 
> leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> org.apache.drill.exec.memory.BaseAllocator.close():519
> org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
> 

[jira] [Created] (DRILL-5901) Drill test framework can have successful run even if a random failure occurs

2017-10-23 Thread Robert Hou (JIRA)
Robert Hou created DRILL-5901:
-

 Summary: Drill test framework can have successful run even if a 
random failure occurs
 Key: DRILL-5901
 URL: https://issues.apache.org/jira/browse/DRILL-5901
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Affects Versions: 1.11.0
Reporter: Robert Hou


Random Failures:
/root/drillAutomation/framework-master/framework/resources/Advanced/tpch/tpch_sf1/original/parquet/query17.sql
Query: 
SELECT
  SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
FROM
  lineitem L,
  part P
WHERE
  P.P_PARTKEY = L.L_PARTKEY
  AND P.P_BRAND = 'BRAND#13'
  AND P.P_CONTAINER = 'JUMBO CAN'
  AND L.L_QUANTITY < (
SELECT
  0.2 * AVG(L2.L_QUANTITY)
FROM
  lineitem L2
WHERE
  L2.L_PARTKEY = P.P_PARTKEY
  )
Failed with exception
java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Memory was leaked 
by query. Memory leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]

  (java.lang.IllegalStateException) Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)

org.apache.drill.exec.memory.BaseAllocator.close():519
org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
org.apache.drill.exec.ops.OperatorContextImpl.close():108
org.apache.drill.exec.ops.FragmentContext.suppressingClose():435
org.apache.drill.exec.ops.FragmentContext.close():424
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():324
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():155
org.apache.drill.exec.work.fragment.FragmentExecutor.run():267
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():744

at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1895)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:61)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:473)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1100)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:477)
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:181)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:110)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:130)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.executeQuery(DrillStatementImpl.java:112)
at 
org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:206)
at 
org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:115)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
ERROR: IllegalStateException: Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]

  (java.lang.IllegalStateException) Memory was leaked by query. Memory leaked: 
(2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)

org.apache.drill.exec.memory.BaseAllocator.close():519
org.apache.drill.exec.ops.AbstractOperatorExecContext.close():86
org.apache.drill.exec.ops.OperatorContextImpl.close():108
org.apache.drill.exec.ops.FragmentContext.suppressingClose():435
org.apache.drill.exec.ops.FragmentContext.close():424
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources():324
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup():155
org.apache.drill.exec.work.fragment.FragmentExecutor.run():267
org.apache.drill.common.SelfCleaningRunnable.run():38

[jira] [Commented] (DRILL-5900) Regression: TPCH query encounters random IllegalStateException: Memory was leaked by query

2017-10-23 Thread Robert Hou (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216035#comment-16216035
 ] 

Robert Hou commented on DRILL-5900:
---

The foreman is on node 81.  Node 88 encounters the memory error.

> Regression: TPCH query encounters random IllegalStateException: Memory was 
> leaked by query
> --
>
> Key: DRILL-5900
> URL: https://issues.apache.org/jira/browse/DRILL-5900
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Attachments: 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f.sys.drill, 
> drillbit.log.node81, drillbit.log.node88
>
>
> This is a random failure.  This test has passed before.
> TPCH query 17:
> {noformat}
> SELECT
>   SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
> FROM
>   lineitem L,
>   part P
> WHERE
>   P.P_PARTKEY = L.L_PARTKEY
>   AND P.P_BRAND = 'BRAND#13'
>   AND P.P_CONTAINER = 'JUMBO CAN'
>   AND L.L_QUANTITY < (
> SELECT
>   0.2 * AVG(L2.L_QUANTITY)
> FROM
>   lineitem L2
> WHERE
>   L2.L_PARTKEY = P.P_PARTKEY
>   )
> {noformat}
> Error is:
> {noformat}
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:8:2] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IllegalStateException: 
> Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> IllegalStateException: Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
>  ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:298)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: java.lang.IllegalStateException: Memory was leaked by query. 
> Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:519) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.AbstractOperatorExecContext.close(AbstractOperatorExecContext.java:86)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.OperatorContextImpl.close(OperatorContextImpl.java:108)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.suppressingClose(FragmentContext.java:435)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.close(FragmentContext.java:424) 
> ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:324)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:155)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> ... 5 common frames omitted
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:6:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:6:0: State to report: FINISHED
> {noformat}
> sys.version is:
> 1.12.0-SNAPSHOT   b0c4e0486d6d4620b04a1bb8198e959d433b4840
> DRILL-5876: Use openssl profile to include netty-tcnative dependency with the 
> 

[jira] [Updated] (DRILL-5900) Regression: TPCH query encounters random IllegalStateException: Memory was leaked by query

2017-10-23 Thread Robert Hou (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Hou updated DRILL-5900:
--
Attachment: 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f.sys.drill
drillbit.log.node81
drillbit.log.node88

> Regression: TPCH query encounters random IllegalStateException: Memory was 
> leaked by query
> --
>
> Key: DRILL-5900
> URL: https://issues.apache.org/jira/browse/DRILL-5900
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Pritesh Maker
>Priority: Blocker
> Attachments: 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f.sys.drill, 
> drillbit.log.node81, drillbit.log.node88
>
>
> This is a random failure.  This test has passed before.
> TPCH query 17:
> {noformat}
> SELECT
>   SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
> FROM
>   lineitem L,
>   part P
> WHERE
>   P.P_PARTKEY = L.L_PARTKEY
>   AND P.P_BRAND = 'BRAND#13'
>   AND P.P_CONTAINER = 'JUMBO CAN'
>   AND L.L_QUANTITY < (
> SELECT
>   0.2 * AVG(L2.L_QUANTITY)
> FROM
>   lineitem L2
> WHERE
>   L2.L_PARTKEY = P.P_PARTKEY
>   )
> {noformat}
> Error is:
> {noformat}
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:8:2] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IllegalStateException: 
> Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> IllegalStateException: Memory was leaked by query. Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> Fragment 8:2
> [Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
>  ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:298)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_51]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_51]
> at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> Caused by: java.lang.IllegalStateException: Memory was leaked by query. 
> Memory leaked: (2097152)
> Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
> (res/actual/peak/limit)
> at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:519) 
> ~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.AbstractOperatorExecContext.close(AbstractOperatorExecContext.java:86)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.OperatorContextImpl.close(OperatorContextImpl.java:108)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.suppressingClose(FragmentContext.java:435)
>  ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.ops.FragmentContext.close(FragmentContext.java:424) 
> ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:324)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:155)
>  [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
> ... 5 common frames omitted
> 2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:6:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:6:0: State to report: FINISHED
> {noformat}
> sys.version is:
> 1.12.0-SNAPSHOT   b0c4e0486d6d4620b04a1bb8198e959d433b4840
> DRILL-5876: Use openssl profile to include 

[jira] [Created] (DRILL-5900) Regression: TPCH query encounters random IllegalStateException: Memory was leaked by query

2017-10-23 Thread Robert Hou (JIRA)
Robert Hou created DRILL-5900:
-

 Summary: Regression: TPCH query encounters random 
IllegalStateException: Memory was leaked by query
 Key: DRILL-5900
 URL: https://issues.apache.org/jira/browse/DRILL-5900
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 1.11.0
Reporter: Robert Hou
Assignee: Pritesh Maker
Priority: Blocker


This is a random failure.  This test has passed before.

TPCH query 17:
{noformat}
SELECT
  SUM(L.L_EXTENDEDPRICE) / 7.0 AS AVG_YEARLY
FROM
  lineitem L,
  part P
WHERE
  P.P_PARTKEY = L.L_PARTKEY
  AND P.P_BRAND = 'BRAND#13'
  AND P.P_CONTAINER = 'JUMBO CAN'
  AND L.L_QUANTITY < (
SELECT
  0.2 * AVG(L2.L_QUANTITY)
FROM
  lineitem L2
WHERE
  L2.L_PARTKEY = P.P_PARTKEY
  )
{noformat}

Error is:
{noformat}
2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:8:2] ERROR 
o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IllegalStateException: 
Memory was leaked by query. Memory leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
IllegalStateException: Memory was leaked by query. Memory leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)


Fragment 8:2

[Error Id: f21a2560-7259-4e13-88c2-9bac29e2930a on atsqa6c88.qa.lab:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
 ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:298)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
Caused by: java.lang.IllegalStateException: Memory was leaked by query. Memory 
leaked: (2097152)
Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 
(res/actual/peak/limit)

at 
org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:519) 
~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.ops.AbstractOperatorExecContext.close(AbstractOperatorExecContext.java:86)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.ops.OperatorContextImpl.close(OperatorContextImpl.java:108)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.ops.FragmentContext.suppressingClose(FragmentContext.java:435)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.ops.FragmentContext.close(FragmentContext.java:424) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.closeOutResources(FragmentExecutor.java:324)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:155)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
... 5 common frames omitted
2017-10-23 10:34:55,989 [2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:frag:6:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 2611d7c0-b0c9-a93e-c64d-a4ef8f4baf8f:6:0: 
State to report: FINISHED
{noformat}

sys.version is:
1.12.0-SNAPSHOT b0c4e0486d6d4620b04a1bb8198e959d433b4840DRILL-5876: Use 
openssl profile to include netty-tcnative dependency with the platform specific 
classifier  20.10.2017 @ 16:52:35 PDT

The previous version that ran clean is this commit:
{noformat}
1.12.0-SNAPSHOT f1d1945b3772bb782039fd6811e34a7de66441c8DRILL-5582: C++ 
Client: [Threat Modeling] Drillbit may be spoofed by an attacker and this may 
lead to data being written to the attacker's target instead of Drillbit   
19.10.2017 @ 17:13:05 PDT
{noformat}

But since the failure is random, the problem could have been introduced earlier.
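As an aside on reading these errors: the allocator summary line packs four counters whose names come from the trailing "(res/actual/peak/limit)" label. A small hedged parser makes the fields explicit — the layout is inferred from the log lines above, not from Drill source, so treat it as a sketch:

```python
import re

def parse_allocator(line):
    """Parse a Drill allocator summary line, e.g.
    Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 (res/actual/peak/limit)
    Field names are taken from the trailing (res/actual/peak/limit) label."""
    m = re.search(r"Allocator\(([^)]+)\)\s+(\d+)/(\d+)/(\d+)/(\d+)", line)
    if m is None:
        return None
    res, actual, peak, limit = (int(g) for g in m.groups()[1:])
    return {"name": m.group(1), "reservation": res, "actual": actual,
            "peak": peak, "limit": limit}

info = parse_allocator(
    "Allocator(op:8:2:6:ParquetRowGroupScan) 100/0/7675904/100 "
    "(res/actual/peak/limit)")
# "peak" is the high-water mark for the operator's allocator; the leaked byte
# count (2097152 here) is reported separately in the exception message.
```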



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5769) IndexOutOfBoundsException when querying JSON files

2017-10-23 Thread David Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216005#comment-16216005
 ] 

David Lee commented on DRILL-5769:
--

I finally debugged which JSON key value was causing my IOBE exception by 
trimming my JSON file down to 349 MB with around 10,200 JSON records. 
Unfortunately I couldn't generate a smaller test file that duplicates the 
problem.

349939049 Oct 23 16:50 test.json

The JSON file contains multiple nested DealingSchedule objects like:

"DealingSchedule": {"DealingTime": {"CutOffTimeDetail": 
[{"CutOffTimeDetailTimeZone": "1", "CutOffTimeDetail_CountryId": "CU$AUS", 
"CutOffTime": "15:00", "DealingType": "3"}]}}

"DealingSchedule": {"DealingTime": {"CutOffTimeDetail": 
[{"CutOffTimeDetailTimeZone": "1", "CutOffTimeDetail_CountryId": "CU$AUS", 
"CutOffTime": "11:00", "DealingType": "3"}]}}

"DealingSchedule": {"ValuationTimeTimeZone": "1", "ValuationTime_CountryId": 
"CU$AUS", "ValuationTime": "12:00"}

"DealingSchedule": {"ValuationTimeTimeZone": "3", "ValuationTime_CountryId": 
"CU$CAN", "ValuationTime": "16:00","DealingTime": {"CutOffTimeDetail": 
[{"CutOffTimeDetailTimeZone": "3", "CutOffTimeDetail_CountryId": "CU$CAN", 
"CutOffTime": "16:00", "DealingType": "3"}]}}

Near the end of the file, this flavor of DealingSchedule appears, containing a 
DealingTimeDetail array with one record. This is the first time the 
DealingSchedule.DealingTime.DealingTimeDetail key appears in the JSON file:

"DealingSchedule": {"ValuationTime_CountryId": "CU$GBR", "ValuationTime": 
"08:00", "DealingTime": {"DealingTimeDetail": [{"DealingTimeDetail_CountryId": 
"CU$GBR", "StartTime": "09:00", "EndTime": "17:00"}]}}

It produces the following error:

org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
IndexOutOfBoundsException: index: 16384, length: 4 (expected: range(0, 16384)) 
Fragment 0:0 [Error Id: 4d7e60fb-b7d0-49bd-9cf7-244dc4d7341d on ...

1. If I remove the array [] brackets and turn it into plain keys, it works:
[{"DealingTimeDetail_CountryId": "CU$GBR", "StartTime": "09:00", "EndTime": 
"17:00"}]
to
{"DealingTimeDetail_CountryId": "CU$GBR", "StartTime": "09:00", "EndTime": 
"17:00"}

2. I tried creating a smaller JSON file with just DealingSchedule objects, but 
Drill read that file without errors.

3. If I add extra records to the array, it also returns an IOBE.
{"DealingTimeDetail": [
{"DealingTimeDetail_CountryId": "CU$GBR", "StartTime": "09:00", "EndTime": 
"17:00"},
{"DealingTimeDetail_CountryId": "CU$GBR", "StartTime": "09:00", "EndTime": 
"17:00"}
]}
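A minimal sketch of generating a file in this shape — the record shapes are copied from the comment above, but the record count, file location, and one-object-per-line layout are assumptions (the real ~349 MB file could not be reduced):

```python
import json
import os
import tempfile

# Common shape used throughout the file (CutOffTimeDetail array).
common = {"DealingSchedule": {"DealingTime": {"CutOffTimeDetail": [
    {"CutOffTimeDetailTimeZone": "1", "CutOffTimeDetail_CountryId": "CU$AUS",
     "CutOffTime": "15:00", "DealingType": "3"}]}}}

# Shape whose nested DealingTimeDetail array first appears near the end of
# the file, triggering the schema change described in the comment.
late = {"DealingSchedule": {"ValuationTime_CountryId": "CU$GBR",
        "ValuationTime": "08:00",
        "DealingTime": {"DealingTimeDetail": [
            {"DealingTimeDetail_CountryId": "CU$GBR",
             "StartTime": "09:00", "EndTime": "17:00"}]}}}

path = os.path.join(tempfile.gettempdir(), "dealing_repro.json")
with open(path, "w") as f:
    for _ in range(10_200):              # common shape for ~10,200 records
        f.write(json.dumps(common) + "\n")
    f.write(json.dumps(late) + "\n")     # new nested key appears only at the end
```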


> IndexOutOfBoundsException when querying JSON files
> --
>
> Key: DRILL-5769
> URL: https://issues.apache.org/jira/browse/DRILL-5769
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server, Storage - JSON
>Affects Versions: 1.10.0
> Environment: *jdk_8u45_x64*
> *single drillbit running on zookeeper*
> *Following options set to TRUE:*
> drill.exec.functions.cast_empty_string_to_null
> store.json.all_text_mode
> store.parquet.enable_dictionary_encoding
> store.parquet.use_new_reader
>Reporter: David Lee
>Assignee: Jinfeng Ni
> Fix For: 1.10.0, 1.11.0, 1.12.0
>
> Attachments: 001.json, 100.json, 111.json
>
>
> *Running the following SQL on these three JSON files fails:*
> 001.json 100.json 111.json
> select t.id
> from dfs.`/tmp/???.json` t
> where t.assetData.debt.couponPaymentFeature.interestBasis = '5'
> *Error:*
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> IndexOutOfBoundsException: index: 1024, length: 1 (expected: range(0, 1024)) 
> Fragment 0:0 [Error Id: ....
> *However running the same SQL on two out of three files works:*
> select t.id
> from dfs.`/tmp/1??.json` t
> where t.assetData.debt.couponPaymentFeature.interestBasis = '5'
> select t.id
> from dfs.`/tmp/?1?.json` t
> where t.assetData.debt.couponPaymentFeature.interestBasis = '5'
> select t.id
> from dfs.`/tmp/??1.json` t
> where t.assetData.debt.couponPaymentFeature.interestBasis = '5'
> *Changing the selected column from t.id to t.* also works: *
> select *
> from dfs.`/tmp/???.json` t
> where t.assetData.debt.couponPaymentFeature.interestBasis = '5'





[jira] [Commented] (DRILL-5664) Enable security for Drill HiveStoragePlugin based on a config parameter

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215963#comment-16215963
 ] 

ASF GitHub Bot commented on DRILL-5664:
---

Github user sohami commented on the issue:

https://github.com/apache/drill/pull/870
  
closing this PR pending the revised version.


> Enable security for Drill HiveStoragePlugin based on a config parameter
> ---
>
> Key: DRILL-5664
> URL: https://issues.apache.org/jira/browse/DRILL-5664
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>
> For enabling security on the DrillClient-to-Drillbit and Drillbit-to-Drillbit 
> channels we have a configuration. But this doesn't ensure that the storage 
> plugin channel is also configured with security turned on. For example: when 
> security is enabled on the Drill side, the HiveStoragePlugin which Drill uses 
> doesn't open a secure channel to the HiveMetastore by default unless someone 
> manually changes the HiveStoragePluginConfig. 
> With this JIRA we are introducing a new config option 
> _security.storage_plugin.enabled: false_ based on which Drill can update the 
> StoragePlugin configs to enable/disable security. When this config is set to 
> true/false, Drill will for now update the HiveStoragePlugin config to set 
> the value of _hive.metastore.sasl.enabled_ to true/false, so that when Drill 
> connects to the Metastore it does so securely. But if a user later tries to 
> update the config in a way that contradicts the Drill config, we will log a 
> warning before updating. 
> Later the same logic can be extended to all the other storage plugins as 
> well, making the corresponding setting change based on the configuration on 
> the Drill side.





[jira] [Commented] (DRILL-5664) Enable security for Drill HiveStoragePlugin based on a config parameter

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215964#comment-16215964
 ] 

ASF GitHub Bot commented on DRILL-5664:
---

Github user sohami closed the pull request at:

https://github.com/apache/drill/pull/870


> Enable security for Drill HiveStoragePlugin based on a config parameter
> ---
>
> Key: DRILL-5664
> URL: https://issues.apache.org/jira/browse/DRILL-5664
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.11.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>
> For enabling security on the DrillClient-to-Drillbit and Drillbit-to-Drillbit 
> channels we have a configuration. But this doesn't ensure that the storage 
> plugin channel is also configured with security turned on. For example: when 
> security is enabled on the Drill side, the HiveStoragePlugin which Drill uses 
> doesn't open a secure channel to the HiveMetastore by default unless someone 
> manually changes the HiveStoragePluginConfig. 
> With this JIRA we are introducing a new config option 
> _security.storage_plugin.enabled: false_ based on which Drill can update the 
> StoragePlugin configs to enable/disable security. When this config is set to 
> true/false, Drill will for now update the HiveStoragePlugin config to set 
> the value of _hive.metastore.sasl.enabled_ to true/false, so that when Drill 
> connects to the Metastore it does so securely. But if a user later tries to 
> update the config in a way that contradicts the Drill config, we will log a 
> warning before updating. 
> Later the same logic can be extended to all the other storage plugins as 
> well, making the corresponding setting change based on the configuration on 
> the Drill side.





[jira] [Commented] (DRILL-5898) Query returns columns in the wrong order

2017-10-23 Thread Boaz Ben-Zvi (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215933#comment-16215933
 ] 

Boaz Ben-Zvi commented on DRILL-5898:
-

 The actual results look more correct than the expected results: first come 
dir0, dir1, dir2, followed by l_orderkey, ..., l_extendedprice. 
However, in both cases the ORDER BY part of the query seems broken -- e.g. 
look at l_orderkey, which is out of order:  653383 > 653378 < 653380 < 
653413 > 653382 ...
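That out-of-order claim can be checked mechanically; the values below are copied from the comparison in the comment (this check is a sketch added for illustration, not part of the original report):

```python
# l_orderkey values in the order the query returned them, per the comment above.
returned = [653383, 653378, 653380, 653413, 653382]

# An ORDER BY l_orderkey result must be non-decreasing; verify that it is not.
is_ascending = all(a <= b for a, b in zip(returned, returned[1:]))
print(is_ascending)  # prints False: the sort order was not honored
```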


> Query returns columns in the wrong order
> 
>
> Key: DRILL-5898
> URL: https://issues.apache.org/jira/browse/DRILL-5898
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Vitalii Diravka
>Priority: Blocker
> Fix For: 1.12.0
>
>
> This is a regression.  It worked with this commit:
> {noformat}
> f1d1945b3772bb782039fd6811e34a7de66441c8  DRILL-5582: C++ Client: [Threat 
> Modeling] Drillbit may be spoofed by an attacker and this may lead to data 
> being written to the attacker's target instead of Drillbit
> {noformat}
> It fails with this commit, although there are six commits total between the 
> last good one and this one:
> {noformat}
> b0c4e0486d6d4620b04a1bb8198e959d433b4840  DRILL-5876: Use openssl profile 
> to include netty-tcnative dependency with the platform specific classifier
> {noformat}
> Query is:
> {noformat}
> select * from 
> dfs.`/drill/testdata/tpch100_dir_partitioned_5files/lineitem` where 
> dir0=2006 and dir1=12 and dir2=15 and l_discount=0.07 order by l_orderkey, 
> l_extendedprice limit 10
> {noformat}
> Columns are returned in a different order.  Here are the expected results:
> {noformat}
> foxes. furiously final ideas cajol1994-05-27  0.071731.42 4   
> F   653442  4965666.0   1.0 1994-06-23  A   1994-06-22
>   NONESHIP215671  0.07200612  15 (1 time(s))
> lly final account 1994-11-09  0.0745881.783   F   
> 653412  1.320809E7  46.01994-11-24  R   1994-11-08  TAKE 
> BACK RETURNREG AIR 458104  0.08200612  15 (1 time(s))
>  the asymptotes   1997-12-29  0.0760882.8 6   O   653413  
> 1.4271413E7 44.01998-02-04  N   1998-01-20  DELIVER IN 
> PERSON   MAIL21456   0.05200612  15 (1 time(s))
> carefully a   1996-09-23  0.075381.88 2   O   653378  
> 1.6702792E7 3.0 1996-11-14  N   1996-10-15  NONEREG 
> AIR 952809  0.05200612  15 (1 time(s))
> ly final requests. boldly ironic theo 1995-09-04  0.072019.94 2   
> O   653380  2416094.0   2.0 1995-11-14  N   1995-10-18
>   COLLECT COD FOB 166101  0.02200612  15 (1 time(s))
> alongside of the even, e  1996-02-14  0.0786140.322   
> O   653409  5622872.0   48.01996-05-02  N   1996-04-22
>   NONESHIP372888  0.04200612  15 (1 time(s))
> es. regular instruct  1996-10-18  0.0725194.0 1   O   653382  
> 6048060.0   25.01996-08-29  N   1996-08-20  DELIVER IN 
> PERSON   AIR 798079  0.0 200612  15 (1 time(s))
> en package1993-09-19  0.0718718.322   F   653440  
> 1.372054E7  12.01993-09-12  A   1993-09-09  DELIVER IN 
> PERSON   TRUCK   970554  0.0 200612  15 (1 time(s))
> ly regular deposits snooze. unusual, even 1998-01-18  0.07
> 12427.921   O   653413  2822631.0   8.0 1998-02-09
>   N   1998-02-05  TAKE BACK RETURNREG AIR 322636  0.01
> 200612  15 (1 time(s))
>  ironic ideas. bra1996-10-13  0.0764711.533   O   
> 653383  6806672.0   41.01996-12-06  N   1996-11-10  TAKE 
> BACK RETURNAIR 556691  0.01200612  15 (1 time(s))
> {noformat}
> Here are the actual results:
> {noformat}
> 2006  12  15  653383  6806672 556691  3   41.064711.53
> 0.070.01N   O   1996-11-10  1996-10-13  1996-12-06
>   TAKE BACK RETURNAIR  ironic ideas. bra
> 2006  12  15  653378  16702792952809  2   3.0 5381.88 
> 0.070.05N   O   1996-10-15  1996-09-23  1996-11-14
>   NONEREG AIR carefully a
> 2006  12  15  653380  2416094 166101  2   2.0 2019.94 0.07
> 0.02N   O   1995-10-18  1995-09-04  1995-11-14  
> COLLECT COD FOB ly final requests. boldly ironic theo
> 2006  12  15 

[jira] [Created] (DRILL-5899) No need to do isAscii check for simple pattern matcher

2017-10-23 Thread Padma Penumarthy (JIRA)
Padma Penumarthy created DRILL-5899:
---

 Summary: No need to do isAscii check for simple pattern matcher
 Key: DRILL-5899
 URL: https://issues.apache.org/jira/browse/DRILL-5899
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Flow
Reporter: Padma Penumarthy
Assignee: Padma Penumarthy
Priority: Critical


For the simple pattern matchers, we do not have to do the isAscii check. 
UTF-8 encoding ensures that no UTF-8 character is a prefix of any other valid 
character. So, for the 4 simple patterns we have, i.e. startsWith, endsWith, 
contains, and constant, we can get rid of this check. This will help improve 
performance. 
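The property cited above - no valid UTF-8 character encoding is a prefix of another - is what makes a plain byte-wise match safe without any ASCII pre-check. A minimal standalone sketch (illustrative only, not Drill's actual matcher code):

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch: a byte-wise "contains" match over UTF-8 data.
// A valid UTF-8 pattern always starts with a lead byte, and continuation
// bytes (10xxxxxx) are distinguishable from lead bytes, so a byte-level
// match can never begin in the middle of a multi-byte character --
// hence no separate isAscii check is required.
public class Utf8Contains {
  static boolean contains(byte[] input, byte[] pattern) {
    outer:
    for (int start = 0; start <= input.length - pattern.length; start++) {
      for (int i = 0; i < pattern.length; i++) {
        if (input[start + i] != pattern[i]) {
          continue outer;  // mismatch: slide the window by one byte
        }
      }
      return true;  // all pattern bytes matched at this offset
    }
    return false;
  }

  public static void main(String[] args) {
    byte[] text = "caffè latte".getBytes(StandardCharsets.UTF_8);
    System.out.println(contains(text, "ffè".getBytes(StandardCharsets.UTF_8)));  // true
    System.out.println(contains(text, "xyz".getBytes(StandardCharsets.UTF_8)));  // false
  }
}
```

The same byte-level reasoning applies to the startsWith, endsWith, and constant patterns, which reduce to byte-range comparisons at fixed offsets.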






[jira] [Commented] (DRILL-5898) Query returns columns in the wrong order

2017-10-23 Thread Robert Hou (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215867#comment-16215867
 ] 

Robert Hou commented on DRILL-5898:
---

This test is part of the Advanced test suite.  The test is 
Advanced/metadata_caching/partition_pruning/data/q2.q.

> Query returns columns in the wrong order
> 
>
> Key: DRILL-5898
> URL: https://issues.apache.org/jira/browse/DRILL-5898
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Relational Operators
>Affects Versions: 1.11.0
>Reporter: Robert Hou
>Assignee: Vitalii Diravka
>Priority: Blocker
> Fix For: 1.12.0
>

[jira] [Created] (DRILL-5898) Query returns columns in the wrong order

2017-10-23 Thread Robert Hou (JIRA)
Robert Hou created DRILL-5898:
-

 Summary: Query returns columns in the wrong order
 Key: DRILL-5898
 URL: https://issues.apache.org/jira/browse/DRILL-5898
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 1.11.0
Reporter: Robert Hou
Assignee: Vitalii Diravka
Priority: Blocker
 Fix For: 1.12.0


This is a regression.  It worked with this commit:
{noformat}
f1d1945b3772bb782039fd6811e34a7de66441c8DRILL-5582: C++ Client: [Threat 
Modeling] Drillbit may be spoofed by an attacker and this may lead to data 
being written to the attacker's target instead of Drillbit
{noformat}
It fails with this commit, although there are six commits total between the 
last good one and this one:
{noformat}
b0c4e0486d6d4620b04a1bb8198e959d433b4840DRILL-5876: Use openssl profile 
to include netty-tcnative dependency with the platform specific classifier
{noformat}


Query is:
{noformat}
select * from dfs.`/drill/testdata/tpch100_dir_partitioned_5files/lineitem` 
where dir0=2006 and dir1=12 and dir2=15 and l_discount=0.07 order by 
l_orderkey, l_extendedprice limit 10
{noformat}

Columns are returned in a different order.  Here are the expected results:
{noformat}
foxes. furiously final ideas cajol  1994-05-27  0.071731.42 4   
F   653442  4965666.0   1.0 1994-06-23  A   1994-06-22  
NONESHIP215671  0.07200612  15 (1 time(s))
lly final account   1994-11-09  0.0745881.783   F   
653412  1.320809E7  46.01994-11-24  R   1994-11-08  TAKE 
BACK RETURNREG AIR 458104  0.08200612  15 (1 time(s))
 the asymptotes 1997-12-29  0.0760882.8 6   O   653413  
1.4271413E7 44.01998-02-04  N   1998-01-20  DELIVER IN 
PERSON   MAIL21456   0.05200612  15 (1 time(s))
carefully a 1996-09-23  0.075381.88 2   O   653378  
1.6702792E7 3.0 1996-11-14  N   1996-10-15  NONEREG AIR 
952809  0.05200612  15 (1 time(s))
ly final requests. boldly ironic theo   1995-09-04  0.072019.94 2   
O   653380  2416094.0   2.0 1995-11-14  N   1995-10-18  
COLLECT COD FOB 166101  0.02200612  15 (1 time(s))
alongside of the even, e1996-02-14  0.0786140.322   
O   653409  5622872.0   48.01996-05-02  N   1996-04-22  
NONESHIP372888  0.04200612  15 (1 time(s))
es. regular instruct1996-10-18  0.0725194.0 1   O   653382  
6048060.0   25.01996-08-29  N   1996-08-20  DELIVER IN 
PERSON   AIR 798079  0.0 200612  15 (1 time(s))
en package  1993-09-19  0.0718718.322   F   653440  
1.372054E7  12.01993-09-12  A   1993-09-09  DELIVER IN 
PERSON   TRUCK   970554  0.0 200612  15 (1 time(s))
ly regular deposits snooze. unusual, even   1998-01-18  0.07
12427.921   O   653413  2822631.0   8.0 1998-02-09  
N   1998-02-05  TAKE BACK RETURNREG AIR 322636  0.012006
12  15 (1 time(s))
 ironic ideas. bra  1996-10-13  0.0764711.533   O   
653383  6806672.0   41.01996-12-06  N   1996-11-10  TAKE 
BACK RETURNAIR 556691  0.01200612  15 (1 time(s))
{noformat}

Here are the actual results:
{noformat}
200612  15  653383  6806672 556691  3   41.064711.53
0.070.01N   O   1996-11-10  1996-10-13  1996-12-06  
TAKE BACK RETURNAIR  ironic ideas. bra
200612  15  653378  16702792952809  2   3.0 5381.88 
0.070.05N   O   1996-10-15  1996-09-23  1996-11-14  
NONEREG AIR carefully a
200612  15  653380  2416094 166101  2   2.0 2019.94 0.07
0.02N   O   1995-10-18  1995-09-04  1995-11-14  COLLECT 
COD FOB ly final requests. boldly ironic theo
200612  15  653413  2822631 322636  1   8.0 12427.92
0.070.01N   O   1998-02-05  1998-01-18  1998-02-09  
TAKE BACK RETURNREG AIR ly regular deposits snooze. unusual, even 
200612  15  653382  6048060 798079  1   25.025194.0 0.07
0.0 N   O   1996-08-20  1996-10-18  1996-08-29  DELIVER 
IN PERSON   AIR es. regular instruct
200612  15  653442  4965666 215671  4   1.0 1731.42 0.07
0.07A   F   1994-06-22  1994-05-27  1994-06-23  NONE
SHIPfoxes. furiously final ideas cajol
200612   

[jira] [Commented] (DRILL-5879) Optimize "Like" operator

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215837#comment-16215837
 ] 

ASF GitHub Bot commented on DRILL-5879:
---

Github user ppadma commented on a diff in the pull request:

https://github.com/apache/drill/pull/1001#discussion_r146388018
  
--- Diff: 
exec/java-exec/src/main/codegen/templates/CastFunctionsSrcVarLenTargetVarLen.java
 ---
@@ -73,6 +73,9 @@ public void eval() {
 out.start =  in.start;
 if (charCount <= length.value || length.value == 0 ) {
   out.end = in.end;
+  if (charCount == (out.end - out.start)) {
+out.asciiMode = 
org.apache.drill.exec.expr.holders.VarCharHolder.CHAR_MODE_IS_ASCII; // we can 
conclude this string is ASCII
--- End diff --

Can you please add comments here? I am not able to understand this change. 
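For context on the check being discussed: a string's UTF-8 byte length equals its character count only when every character encodes to a single byte, i.e. when the string is pure ASCII. A standalone illustration of that equivalence (assumed reasoning, not the Drill code itself):

```java
import java.nio.charset.StandardCharsets;

// Illustrative: only code points U+0000..U+007F (ASCII) encode to a
// single UTF-8 byte, so byteLength == charCount implies pure ASCII.
public class AsciiCheck {
  static boolean isAscii(String s) {
    return s.getBytes(StandardCharsets.UTF_8).length == s.length();
  }

  public static void main(String[] args) {
    System.out.println(isAscii("drill"));  // true: 5 bytes, 5 chars
    System.out.println(isAscii("drïll"));  // false: 6 bytes, 5 chars
  }
}
```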


> Optimize "Like" operator
> 
>
> Key: DRILL-5879
> URL: https://issues.apache.org/jira/browse/DRILL-5879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
> Environment: * 
>Reporter: salim achouche
>Assignee: salim achouche
>Priority: Minor
> Fix For: 1.12.0
>
>
> Query: select  from  where colA like '%a%' or colA like 
> '%xyz%';
> Improvement Opportunities
> # Avoid isAscii computation (full access of the input string) since we're 
> dealing with the same column twice
> # Optimize the "contains" for-loop 
> Implementation Details
> 1)
> * Added a new integer variable "asciiMode" to the VarCharHolder class
> * The default value is -1 which indicates this info is not known
> * Otherwise this value will be set to either 1 or 0 based on the string being 
> in ASCII mode or Unicode
> * The execution plan already shares the same VarCharHolder instance for all 
> evaluations of the same column value
> * The asciiMode will be correctly set during the first LIKE evaluation and 
> will be reused across other LIKE evaluations
> 2) 
> * The "Contains" LIKE operation is quite expensive as the code needs to 
> access the input string to perform character based comparisons
> * Created 4 versions of the same for-loop to a) make the loop simpler to 
> optimize (Vectorization) and b) minimize comparisons
> Benchmarks
> * Lineitem table 100GB
> * Query: select l_returnflag, count(*) from dfs.`` where l_comment 
> not like '%a%' or l_comment like '%the%' group by l_returnflag
> * Before changes: 33sec
> * After changes: 27sec
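The asciiMode caching described in part 1) above can be sketched in simplified standalone form; the names below are illustrative and not Drill's actual holder API:

```java
// Simplified sketch of caching an "ascii mode" flag on a shared value
// holder, so that repeated LIKE evaluations of the same column value
// pay the full isAscii scan only once. Names are hypothetical.
public class AsciiModeCache {
  static final int UNKNOWN = -1, NOT_ASCII = 0, IS_ASCII = 1;

  static class ValueHolder {
    byte[] bytes;
    int asciiMode = UNKNOWN;  // -1 until the first evaluation computes it
  }

  static boolean isAscii(ValueHolder v) {
    if (v.asciiMode == UNKNOWN) {  // first LIKE evaluation: scan the bytes
      v.asciiMode = IS_ASCII;
      for (byte b : v.bytes) {
        if ((b & 0x80) != 0) {     // high bit set => multi-byte UTF-8
          v.asciiMode = NOT_ASCII;
          break;
        }
      }
    }
    return v.asciiMode == IS_ASCII;  // subsequent evaluations are O(1)
  }

  public static void main(String[] args) {
    ValueHolder v = new ValueHolder();
    v.bytes = "hello".getBytes(java.nio.charset.StandardCharsets.UTF_8);
    System.out.println(isAscii(v));  // scans the bytes once: true
    System.out.println(isAscii(v));  // uses the cached flag: true
  }
}
```

This works because, as the description notes, the same holder instance is shared across all LIKE evaluations of a given column value.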





[jira] [Commented] (DRILL-5879) Optimize "Like" operator

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215836#comment-16215836
 ] 

ASF GitHub Bot commented on DRILL-5879:
---

Github user ppadma commented on a diff in the pull request:

https://github.com/apache/drill/pull/1001#discussion_r146392275
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/SqlPatternContainsMatcher.java
 ---
@@ -17,37 +17,166 @@
  */
 package org.apache.drill.exec.expr.fn.impl;
 
-public class SqlPatternContainsMatcher implements SqlPatternMatcher {
+public final class SqlPatternContainsMatcher implements SqlPatternMatcher {
   final String patternString;
   CharSequence charSequenceWrapper;
   final int patternLength;
+  final MatcherFcn matcherFcn;
 
   public SqlPatternContainsMatcher(String patternString, CharSequence 
charSequenceWrapper) {
-this.patternString = patternString;
+this.patternString   = patternString;
 this.charSequenceWrapper = charSequenceWrapper;
-patternLength = patternString.length();
+patternLength= patternString.length();
+
+// The idea is to write loops with simple condition checks to allow 
the Java Hotspot achieve
+// better optimizations (especially vectorization)
+if (patternLength == 1) {
+  matcherFcn = new Matcher1();
--- End diff --

I am not sure it is a good idea to write special-case code for each 
pattern length; it is not easy to maintain. Can you please give more details 
on how this is improving performance? Are we getting better performance for 
patternLength 1, 2, 3, N, or all of them? If so, how much and why? 
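For reference, the length-specialized matchers under discussion look roughly like this simplified standalone sketch (not the actual patch):

```java
// Simplified sketch of length-specialized "contains" loops: the
// single-character case drops the inner loop entirely, giving HotSpot a
// simpler loop body to optimize. Not the actual Drill patch.
public class ContainsMatchers {
  interface MatcherFcn { boolean match(CharSequence s); }

  static MatcherFcn forPattern(String pattern) {
    if (pattern.length() == 1) {
      final char c = pattern.charAt(0);
      return s -> {  // specialized: one comparison per input position
        for (int i = 0; i < s.length(); i++) {
          if (s.charAt(i) == c) return true;
        }
        return false;
      };
    }
    return s -> {    // general nested-loop fallback for longer patterns
      outer:
      for (int start = 0; start <= s.length() - pattern.length(); start++) {
        for (int i = 0; i < pattern.length(); i++) {
          if (s.charAt(start + i) != pattern.charAt(i)) continue outer;
        }
        return true;
      }
      return false;
    };
  }

  public static void main(String[] args) {
    System.out.println(forPattern("d").match("drill"));    // true
    System.out.println(forPattern("ill").match("drill"));  // true
  }
}
```

Whether each extra specialization pays for its maintenance cost is exactly the benchmarking question raised in the review comment.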


> Optimize "Like" operator
> 
>
> Key: DRILL-5879
> URL: https://issues.apache.org/jira/browse/DRILL-5879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
> Environment: * 
>Reporter: salim achouche
>Assignee: salim achouche
>Priority: Minor
> Fix For: 1.12.0
>





[jira] [Commented] (DRILL-5879) Optimize "Like" operator

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215835#comment-16215835
 ] 

ASF GitHub Bot commented on DRILL-5879:
---

Github user ppadma commented on a diff in the pull request:

https://github.com/apache/drill/pull/1001#discussion_r146390381
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/CharSequenceWrapper.java
 ---
@@ -53,13 +55,13 @@
   // The end offset into the drill buffer
   private int end;
   // Indicates that the current byte buffer contains only ascii chars
-  private boolean usAscii;
+  private boolean useAscii;
 
   public CharSequenceWrapper() {
   }
 
   public CharSequenceWrapper(int start, int end, DrillBuf buffer) {
-setBuffer(start, end, buffer);
+setBuffer(start, end, buffer, -1);
--- End diff --

what does -1 mean ? Shouldn't it be one of CHAR_MODE_IS_ASCII, 
CHAR_MODE_UNKNOWN or CHAR_MODE_NOT_ASCII ?



> Optimize "Like" operator
> 
>
> Key: DRILL-5879
> URL: https://issues.apache.org/jira/browse/DRILL-5879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
> Environment: * 
>Reporter: salim achouche
>Assignee: salim achouche
>Priority: Minor
> Fix For: 1.12.0
>





[jira] [Assigned] (DRILL-2362) Drill should manage Query Profiling archiving

2017-10-23 Thread Padma Penumarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Padma Penumarthy reassigned DRILL-2362:
---

Assignee: Padma Penumarthy

> Drill should manage Query Profiling archiving
> -
>
> Key: DRILL-2362
> URL: https://issues.apache.org/jira/browse/DRILL-2362
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 0.7.0
>Reporter: Chris Westin
>Assignee: Padma Penumarthy
> Fix For: Future
>
>
> We collect query profile information for analysis purposes, but we keep it 
> forever. At this time, for a few queries, it isn't a problem. But as users 
> start putting Drill into production, automated use via other applications 
> will make this grow quickly. We need to come up with a retention policy 
> mechanism, with suitable settings administrators can use, and implement it so 
> that this data can be cleaned up.





[jira] [Assigned] (DRILL-2861) enhance drill profile file management

2017-10-23 Thread Padma Penumarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-2861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Padma Penumarthy reassigned DRILL-2861:
---

Assignee: Padma Penumarthy

> enhance drill profile file management
> -
>
> Key: DRILL-2861
> URL: https://issues.apache.org/jira/browse/DRILL-2861
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Other
>Affects Versions: 0.9.0
>Reporter: Chun Chang
>Assignee: Padma Penumarthy
> Fix For: Future
>
>
> We need to manage profile files better. Currently each query creates one 
> profile file on the local filesystem of the foreman node. You can imagine how 
> this can quickly get out of hand in a production environment.
> We need:
> 1. be able to turn on and off profiling, preferably on the fly
> 2. profiling files should be managed the same as log files
> 3. able to change default file location, for example on a distributed 
> filesystem





[jira] [Assigned] (DRILL-4667) Improve memory footprint of broadcast joins

2017-10-23 Thread Padma Penumarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Padma Penumarthy reassigned DRILL-4667:
---

Assignee: Padma Penumarthy

> Improve memory footprint of broadcast joins
> ---
>
> Key: DRILL-4667
> URL: https://issues.apache.org/jira/browse/DRILL-4667
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.6.0
>Reporter: Aman Sinha
>Assignee: Padma Penumarthy
> Fix For: Future
>
>
> For broadcast joins, currently Drill optimizes the data transfer across the 
> network for broadcast table by sending a single copy to the receiving node 
> which then distributes it to all minor fragments running on that particular 
> node.  However, each minor fragment builds its own hash table (for a hash 
> join) using this broadcast table.  We can substantially improve the memory 
> footprint by having a shared copy of the hash table among multiple minor 
> fragments on a node.  





[jira] [Commented] (DRILL-5879) Optimize "Like" operator

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215592#comment-16215592
 ] 

ASF GitHub Bot commented on DRILL-5879:
---

Github user sachouche commented on the issue:

https://github.com/apache/drill/pull/1001
  
Paul, again thanks for the detailed review:

- I was able to address most of the feedback except for one item
- I agree that expressions that can operate directly on the encoded UTF-8 
string should ideally perform checks on bytes and not characters
- Having said that, such a change is more involved and should be done 
properly
   o The SqlPatternContainsMatcher currently gets a CharSequence as input
   o We should enhance the expression framework so that matchers can a) 
express their capabilities and b) receive the expected data type (Character or 
Byte sequences)
   o Note also there is an impact on the test suite, since StringBuffers are 
being used to directly test the matcher functionality 


> Optimize "Like" operator
> 
>
> Key: DRILL-5879
> URL: https://issues.apache.org/jira/browse/DRILL-5879
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
> Environment: * 
>Reporter: salim achouche
>Assignee: salim achouche
>Priority: Minor
> Fix For: 1.12.0
>





[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215534#comment-16215534
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user HanumathRao commented on the issue:

https://github.com/apache/drill/pull/996
  
@arina-ielchiieva Thank you for the comments. There is some work that went 
into Calcite to handle meaningful error messages. This is the checkin that has 
those changes:

https://github.com/apache/calcite/commit/5f9c019080c7231acaf3df80732d915351051d93#diff-0c11f3f4d738e3fa55968eb19f1c8050

It reports following errors when a table cannot be resolved.
{code}
select empid from "hr".emps;
Object 'EMPS' not found within 'hr'; did you mean 'emps'?
!error
{code}

However, I think the error logic should be customized to the particular 
software (in this case Drill) so as to report semantically meaningful error 
messages. Drill knows more about the context and hence can provide more 
customized error messages to the end user. 



> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
> Fix For: 1.12.0
>
>
> Drill is reporting a TableNotFound exception for the wrong storage plugin. 
> Consider the following query, where employee.json is queried using the cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren 

[jira] [Commented] (DRILL-5717) change some date time unit cases with specific timezone or Local

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215504#comment-16215504
 ] 

ASF GitHub Bot commented on DRILL-5717:
---

Github user vvysotskyi commented on the issue:

https://github.com/apache/drill/pull/904
  
@weijietong thanks for the explanation of your problem. I was able to 
reproduce it, but I also found a working solution. This mock works correctly; the 
problem appears when the unit test is run together with other tests: the 
[DateTimeZone](https://github.com/JodaOrg/joda-time/blob/ba95735daf79d00ce0928f30701d691f5e029d81/src/main/java/org/joda/time/DateTimeZone.java)
 class contains the static field 
[cDefault](https://github.com/JodaOrg/joda-time/blob/ba95735daf79d00ce0928f30701d691f5e029d81/src/main/java/org/joda/time/DateTimeZone.java#L128)
, which is used to obtain the timezone in `testToDateForTimeStamp()` and in other 
tests. When no timezone has been set on this field and the method 
[getDefault()](https://github.com/JodaOrg/joda-time/blob/ba95735daf79d00ce0928f30701d691f5e029d81/src/main/java/org/joda/time/DateTimeZone.java#L149)
 is called, the value is taken from `System.getProperty()`. 
When the first test that calls this method does not have a mock, it sets the 
real timezone value from `System.getProperty()`, and therefore our test uses an 
unmocked value from `cDefault`. 

So let's mock the `DateTimeZone.getDefault()` method:
```
new MockUp<DateTimeZone>() {
  @Mock
  public DateTimeZone getDefault() {
    return DateTimeZone.UTC;
  }
};
```


> change some date time unit cases with specific timezone or Local
> 
>
> Key: DRILL-5717
> URL: https://issues.apache.org/jira/browse/DRILL-5717
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.9.0, 1.11.0
>Reporter: weijie.tong
>
> Some date-time test cases like JodaDateValidatorTest are not locale 
> independent. This will cause the test phase to fail for users in other 
> locales. We should make these test cases independent of the local environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5888) jdbc-all-jar unit tests broken because of dependency on hadoop.security

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215436#comment-16215436
 ] 

ASF GitHub Bot commented on DRILL-5888:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1002


> jdbc-all-jar unit tests broken because of dependency on hadoop.security
> ---
>
> Key: DRILL-5888
> URL: https://issues.apache.org/jira/browse/DRILL-5888
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: Parth Chandra
>Assignee: Parth Chandra
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> In some of the build profiles, the jdbc-all-jar is being built with all the 
> hadoop classes excluded. The changes for DRILL-5431 introduced a new 
> dependency on hadoop security, and because those classes are not available for 
> jdbc-all-jar, the unit tests fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215437#comment-16215437
 ] 

ASF GitHub Bot commented on DRILL-5862:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/985


> Update project parent pom xml to the latest ASF version
> ---
>
> Key: DRILL-5862
> URL: https://issues.apache.org/jira/browse/DRILL-5862
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.11.0
>Reporter: Vlad Rozov
>Assignee: Vlad Rozov
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5893) Maven forkCount property is too aggressive causing builds to fail on some machines.

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215438#comment-16215438
 ] 

ASF GitHub Bot commented on DRILL-5893:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1003


> Maven forkCount property is too aggressive causing builds to fail on some 
> machines.
> ---
>
> Key: DRILL-5893
> URL: https://issues.apache.org/jira/browse/DRILL-5893
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.12.0
>Reporter: Timothy Farkas
>Assignee: Timothy Farkas
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> In DRILL-5752 I changed the forkCount parameter from "2" to "1C". This 
> changed the number of test processes spawned from 2 to 1 per core on the 
> machine. This worked fine on dev laptops and Jenkins servers, but large build 
> machines (32 cores) can get slowed down by all the test processes, resulting 
> in tests timing out. Spawning so many test processes can also aggravate the 
> issue described in DRILL-5890. 
> For this jira I will revert the default for forkCount back to "2".
> *For documentation*
> https://drill.apache.org/docs/apache-drill-contribution-guidelines/
> Next to the line {{Contributions should pass existing unit tests.}} we may add 
> that a developer should have a successful build using {{mvn clean install}}; 
> to speed up unit tests, {{mvn clean install -DforkCount=1C}} can be used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5876) Remove netty-tcnative inclusion in java-exec/pom.xml

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215439#comment-16215439
 ] 

ASF GitHub Bot commented on DRILL-5876:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1004


> Remove netty-tcnative inclusion in java-exec/pom.xml
> 
>
> Key: DRILL-5876
> URL: https://issues.apache.org/jira/browse/DRILL-5876
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Parth Chandra
>Assignee: Vlad Rozov
>Priority: Minor
>
> The inclusion of netty-tcnative is causing all kinds of problems. The 
> OS-specific classifier required is determined by a Maven extension, which in 
> turn requires an additional Eclipse plugin. The Eclipse plugin has a problem 
> that may corrupt the current workspace.
> It is safe to not include the dependency since it is required only at 
> runtime. The only case in which this is required is when a developer has to 
> debug SSL/OpenSSL issues in the Java client or the server when launching from 
> within an IDE. In this case, the dependency can be enabled by uncommenting 
> the relevant lines in the pom file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5845) Columns returned by select with "ORDER BY" and "LIMIT" clauses are not in correct order.

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215434#comment-16215434
 ] 

ASF GitHub Bot commented on DRILL-5845:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/1000


> Columns returned by select with "ORDER BY" and "LIMIT" clauses are not in 
> correct order.
> 
>
> Key: DRILL-5845
> URL: https://issues.apache.org/jira/browse/DRILL-5845
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.11.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Column order is correct for queries with only one of the clauses: ORDER BY or 
> LIMIT. For queries with both clauses, the column order isn't preserved.
> Test case for reproduce:
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`tpch/nation.parquet` limit 1;
> +--+--+--+--+
> | n_nationkey  |  n_name  | n_regionkey  |  n_comment 
>   |
> +--+--+--+--+
> | 0| ALGERIA  | 0|  haggle. carefully final deposits 
> detect slyly agai  |
> +--+--+--+--+
> 1 row selected (0.181 seconds)
> 0: jdbc:drill:zk=local> select * from cp.`tpch/nation.parquet` order by 
> n_name limit 1;
> +--+--+--+--+
> |  n_comment   |  n_name  | 
> n_nationkey  | n_regionkey  |
> +--+--+--+--+
> |  haggle. carefully final deposits detect slyly agai  | ALGERIA  | 0 
>| 0|
> +--+--+--+--+
> 1 row selected (0.154 seconds)
> {code}
> For JSON files the column ordering is not preserved either:
> {code}
> select * from cp.`employee.json` limit 1;
> select * from cp.`employee.json` order by full_name limit 1;
> {code}
> Perhaps the wrong operator for sorting is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5873) Drill C++ Client should throw proper/complete error message for the ODBC driver to consume

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215435#comment-16215435
 ] 

ASF GitHub Bot commented on DRILL-5873:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/992


> Drill C++ Client should throw proper/complete error message for the ODBC 
> driver to consume
> --
>
> Key: DRILL-5873
> URL: https://issues.apache.org/jira/browse/DRILL-5873
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Reporter: Krystal
>Assignee: Parth Chandra
>  Labels: ready-to-commit
>
> The Drill C++ Client should throw a proper/complete error message for the 
> driver to utilize.
> The ODBC driver is directly outputting the exception message thrown by the 
> client by calling the getError() API after the connect() API has failed with 
> an error status.
> For the Java client, similar logic is hard coded at 
> https://github.com/apache/drill/blob/1.11.0/exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java#L247.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5834) Add Networking Functions

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5834:

Description: 
On the heels of the PCAP plugin, this is a collection of functions that would 
facilitate network analysis using Drill. 

The functions include:

inet_aton(): Converts an IPv4 address into an integer.
inet_ntoa( ): Converts an integer IP into dotted decimal notation
in_network( , ): Returns true if the IP address is in the given CIDR 
block
address_count(  ): Returns the number of IPs in a given CIDR block
broadcast_address(  ): Returns the broadcast address for a given CIDR 
block
netmask( ): Returns the netmask for a given CIDR block.
low_address(): Returns the first address in a given CIDR block.
high_address(): Returns the last address in a given CIDR block.
url_encode(  ): Returns a URL encoded string.
url_decode(  ): Decodes a URL encoded string.
is_valid_IP(): Returns true if the IP is a valid IP address
is_private_ip(): Returns true if the IP is a private IPv4 address
is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
is_valid_IPv6(): Returns true if the IP is a valid IPv6 address
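The first two conversions in the list are plain bit arithmetic over the four octets. A minimal standalone sketch of their semantics (method names hypothetical; this is not the plugin's actual UDF implementation):

```java
public class InetSketch {
    // inet_aton-style: dotted-quad IPv4 string -> unsigned 32-bit value in a long
    static long inetAton(String ip) {
        long result = 0;
        for (String octet : ip.split("\\.")) {
            // Shift previous octets left one byte, then add the next octet.
            result = (result << 8) | Integer.parseInt(octet);
        }
        return result;
    }

    // inet_ntoa-style: integer back to dotted decimal notation
    static String inetNtoa(long addr) {
        return String.format("%d.%d.%d.%d",
                (addr >> 24) & 0xFF, (addr >> 16) & 0xFF,
                (addr >> 8) & 0xFF, addr & 0xFF);
    }

    public static void main(String[] args) {
        long n = inetAton("192.168.1.1");
        System.out.println(n);           // integer form of 192.168.1.1
        System.out.println(inetNtoa(n)); // round-trips back to dotted decimal
    }
}
```

The CIDR-based functions (in_network, address_count, netmask, and friends) follow from the same representation by masking off the host bits of the integer form.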



  was:
On the heels of the PCAP plugin, this is a collection of functions that would 
facilitate network analysis using Drill. 

This is a collection of Networking Functions to facilitate network analysis. 
The functions include:

inet_aton(): Converts an IPv4 address into an integer.
inet_ntoa( ): Converts an integer IP into dotted decimal notation
in_network( , ): Returns true if the IP address is in the given CIDR 
block
address_count(  ): Returns the number of IPs in a given CIDR block
broadcast_address(  ): Returns the broadcast address for a given CIDR 
block
netmask( ): Returns the netmask for a given CIDR block.
low_address(): Returns the first address in a given CIDR block.
high_address(): Returns the last address in a given CIDR block.
url_encode(  ): Returns a URL encoded string.
url_decode(  ): Decodes a URL encoded string.
is_valid_IP(): Returns true if the IP is a valid IP address
is_private_ip(): Returns true if the IP is a private IPv4 address
is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
is_valid_IPv6(): Returns true if the IP is a valid IPv6 address




> Add Networking Functions
> 
>
> Key: DRILL-5834
> URL: https://issues.apache.org/jira/browse/DRILL-5834
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Minor
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> On the heels of the PCAP plugin, this is a collection of functions that would 
> facilitate network analysis using Drill. 
> The functions include:
> inet_aton(): Converts an IPv4 address into an integer.
> inet_ntoa( ): Converts an integer IP into dotted decimal notation
> in_network( , ): Returns true if the IP address is in the given 
> CIDR block
> address_count(  ): Returns the number of IPs in a given CIDR block
> broadcast_address(  ): Returns the broadcast address for a given CIDR 
> block
> netmask( ): Returns the netmask for a given CIDR block.
> low_address(): Returns the first address in a given CIDR block.
> high_address(): Returns the last address in a given CIDR block.
> url_encode(  ): Returns a URL encoded string.
> url_decode(  ): Decodes a URL encoded string.
> is_valid_IP(): Returns true if the IP is a valid IP address
> is_private_ip(): Returns true if the IP is a private IPv4 address
> is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
> is_valid_IPv6(): Returns true if the IP is a valid IPv6 address



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5834) Add Networking Functions

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5834:

Labels: doc-impacting ready-to-commit  (was: doc-impacting)

> Add Networking Functions
> 
>
> Key: DRILL-5834
> URL: https://issues.apache.org/jira/browse/DRILL-5834
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Minor
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.12.0
>
>
> On the heels of the PCAP plugin, this is a collection of functions that would 
> facilitate network analysis using Drill. 
> This is a collection of Networking Functions to facilitate network analysis. 
> The functions include:
> inet_aton(): Converts an IPv4 address into an integer.
> inet_ntoa( ): Converts an integer IP into dotted decimal notation
> in_network( , ): Returns true if the IP address is in the given 
> CIDR block
> address_count(  ): Returns the number of IPs in a given CIDR block
> broadcast_address(  ): Returns the broadcast address for a given CIDR 
> block
> netmask( ): Returns the netmask for a given CIDR block.
> low_address(): Returns the first address in a given CIDR block.
> high_address(): Returns the last address in a given CIDR block.
> url_encode(  ): Returns a URL encoded string.
> url_decode(  ): Decodes a URL encoded string.
> is_valid_IP(): Returns true if the IP is a valid IP address
> is_private_ip(): Returns true if the IP is a private IPv4 address
> is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
> is_valid_IPv6(): Returns true if the IP is a valid IPv6 address



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5834) Add Networking Functions

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5834:

Description: 
On the heels of the PCAP plugin, this is a collection of functions that would 
facilitate network analysis using Drill. 

This is a collection of Networking Functions to facilitate network analysis. 
The functions include:

inet_aton(): Converts an IPv4 address into an integer.
inet_ntoa( ): Converts an integer IP into dotted decimal notation
in_network( , ): Returns true if the IP address is in the given CIDR 
block
address_count(  ): Returns the number of IPs in a given CIDR block
broadcast_address(  ): Returns the broadcast address for a given CIDR 
block
netmask( ): Returns the netmask for a given CIDR block.
low_address(): Returns the first address in a given CIDR block.
high_address(): Returns the last address in a given CIDR block.
url_encode(  ): Returns a URL encoded string.
url_decode(  ): Decodes a URL encoded string.
is_valid_IP(): Returns true if the IP is a valid IP address
is_private_ip(): Returns true if the IP is a private IPv4 address
is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
is_valid_IPv6(): Returns true if the IP is a valid IPv6 address



  was:On the heels of the PCAP plugin, this is a collection of functions that 
would facilitate network analysis using Drill. 


> Add Networking Functions
> 
>
> Key: DRILL-5834
> URL: https://issues.apache.org/jira/browse/DRILL-5834
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 1.12.0
>
>
> On the heels of the PCAP plugin, this is a collection of functions that would 
> facilitate network analysis using Drill. 
> This is a collection of Networking Functions to facilitate network analysis. 
> The functions include:
> inet_aton(): Converts an IPv4 address into an integer.
> inet_ntoa( ): Converts an integer IP into dotted decimal notation
> in_network( , ): Returns true if the IP address is in the given 
> CIDR block
> address_count(  ): Returns the number of IPs in a given CIDR block
> broadcast_address(  ): Returns the broadcast address for a given CIDR 
> block
> netmask( ): Returns the netmask for a given CIDR block.
> low_address(): Returns the first address in a given CIDR block.
> high_address(): Returns the last address in a given CIDR block.
> url_encode(  ): Returns a URL encoded string.
> url_decode(  ): Decodes a URL encoded string.
> is_valid_IP(): Returns true if the IP is a valid IP address
> is_private_ip(): Returns true if the IP is a private IPv4 address
> is_valid_IPv4(): Returns true if the IP is a valid IPv4 address
> is_valid_IPv6(): Returns true if the IP is a valid IPv6 address



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5772) Enable UTF-8 support in query string by default

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215134#comment-16215134
 ] 

ASF GitHub Bot commented on DRILL-5772:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/936
  
@paul-rogers The Calcite community has approved my changes and they have 
already been cherry-picked into the Drill Calcite branch. I have updated the pull 
request to reflect these recent changes. Now UTF-8 support in the query string 
will be enabled by default and controlled using the saffron.properties file.


> Enable UTF-8 support in query string by default
> ---
>
> Key: DRILL-5772
> URL: https://issues.apache.org/jira/browse/DRILL-5772
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.11.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>  Labels: doc-impacting
> Fix For: 1.12.0
>
>
> Add a unit test to indicate how UTF-8 support can be enabled in Drill.
> To select UTF-8 data, the user needs to set the system property 
> {{saffron.default.charset}} to {{UTF-16LE}} before starting the drillbit. 
> Calcite uses this property to get the default charset; if it is not set, then 
> {{ISO-8859-1}} is used by default. Drill gets its default charset from Calcite.
> This information should also be documented, probably in 
> https://drill.apache.org/docs/data-type-conversion/.
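Since {{saffron.default.charset}} is an ordinary JVM system property, the lookup the description refers to can be sketched with plain JDK calls; the key point is that the property must be set before any Calcite class reads it (illustration only, using the fallback value from the description):

```java
import java.nio.charset.Charset;

public class SaffronCharsetDemo {
    public static void main(String[] args) {
        // Must happen before any Calcite class reads the property,
        // e.g. via -Dsaffron.default.charset=UTF-16LE at drillbit startup.
        System.setProperty("saffron.default.charset", "UTF-16LE");

        // Sketch of the lookup: take the property value, else fall back to ISO-8859-1.
        String name = System.getProperty("saffron.default.charset", "ISO-8859-1");
        Charset charset = Charset.forName(name);
        System.out.println(charset.name());
    }
}
```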



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (DRILL-5878) TableNotFound exception is being reported for a wrong storage plugin.

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215023#comment-16215023
 ] 

ASF GitHub Bot commented on DRILL-5878:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/996
  
Well, with the short-term solution I am afraid that eventually we'll forget 
to revert it when the Calcite checks take over, and we'll be checking for the 
schema twice.

@HanumathRao before going any further with this pull request, could you 
please investigate the following:
1. Do newer Calcite versions handle this case?
2. If not, can this be handled in Calcite? If yes, please create a Jira in 
Calcite. If possible, create a fix and check whether the Calcite community will 
accept it.




> TableNotFound exception is being reported for a wrong storage plugin.
> -
>
> Key: DRILL-5878
> URL: https://issues.apache.org/jira/browse/DRILL-5878
> Project: Apache Drill
>  Issue Type: Bug
>  Components: SQL Parser
>Affects Versions: 1.11.0
>Reporter: Hanumath Rao Maduri
>Assignee: Hanumath Rao Maduri
>Priority: Minor
> Fix For: 1.12.0
>
>
> Drill is reporting a TableNotFound exception for the wrong storage plugin. 
> Consider the following query, where employee.json is queried using the cp plugin.
> {code}
> 0: jdbc:drill:zk=local> select * from cp.`employee.json` limit 10;
> +--++-++--+-+---++-++--++---+-+-++
> | employee_id  | full_name  | first_name  | last_name  | position_id  
> | position_title  | store_id  | department_id  | birth_date  |   
> hire_date|  salary  | supervisor_id  |  education_level  | 
> marital_status  | gender  |  management_role   |
> +--++-++--+-+---++-++--++---+-+-++
> | 1| Sheri Nowmer   | Sheri   | Nowmer | 1
> | President   | 0 | 1  | 1961-08-26  | 
> 1994-12-01 00:00:00.0  | 8.0  | 0  | Graduate Degree   | S
>| F   | Senior Management  |
> | 2| Derrick Whelply| Derrick | Whelply| 2
> | VP Country Manager  | 0 | 1  | 1915-07-03  | 
> 1994-12-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | M
>| M   | Senior Management  |
> | 4| Michael Spence | Michael | Spence | 2
> | VP Country Manager  | 0 | 1  | 1969-06-20  | 
> 1998-01-01 00:00:00.0  | 4.0  | 1  | Graduate Degree   | S
>| M   | Senior Management  |
> | 5| Maya Gutierrez | Maya| Gutierrez  | 2
> | VP Country Manager  | 0 | 1  | 1951-05-10  | 
> 1998-01-01 00:00:00.0  | 35000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 6| Roberta Damstra| Roberta | Damstra| 3
> | VP Information Systems  | 0 | 2  | 1942-10-08  | 
> 1994-12-01 00:00:00.0  | 25000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 7| Rebecca Kanagaki   | Rebecca | Kanagaki   | 4
> | VP Human Resources  | 0 | 3  | 1949-03-27  | 
> 1994-12-01 00:00:00.0  | 15000.0  | 1  | Bachelors Degree  | M
>| F   | Senior Management  |
> | 8| Kim Brunner| Kim | Brunner| 11   
> | Store Manager   | 9 | 11 | 1922-08-10  | 
> 1998-01-01 00:00:00.0  | 1.0  | 5  | Bachelors Degree  | S
>| F   | Store Management   |
> | 9| Brenda Blumberg| Brenda  | Blumberg   | 11   
> | Store Manager   | 21| 11 | 1979-06-23  | 
> 1998-01-01 00:00:00.0  | 17000.0  | 5  | Graduate Degree   | M
>| F   | Store Management   |
> | 10   | Darren Stanz   | Darren  | Stanz  | 5
> | VP Finance  | 0 | 5  | 1949-08-26  | 
> 1994-12-01 00:00:00.0  | 5.0  | 1  | Partial College   | M
>| M   | Senior Management  |
> | 11   | Jonathan 

[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5895:

Description: 
Mongo tests finish with the following exception intermittently. It happens because 
the [timeout 
value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
 from the de.flapdoodle.embed.process library is too low for the mongod process 
to be stopped gracefully. 

I have created an 
[issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64] 
to suggest making the timeout configurable. For now, as a temporary solution, we 
will log the mongod exception instead of throwing it.

{noformat}
[mongod output] Exception in thread "Thread-25" 
java.lang.IllegalStateException: Couldn't kill mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 
de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't kill 
mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 
de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
at java.lang.Thread.run(Thread.java:745)

Results :

Tests in error: 
  MongoTestSuit.tearDownCluster:260 » IllegalState Couldn't kill mongod 
process!...

{noformat}
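The configurable graceful-stop timeout that the linked flapdoodle issue requests can be sketched with the JDK process API: ask the child to terminate politely, wait for a grace period, and escalate only if it is still alive (an illustrative sketch on a Unix-like host, not the library's code; `sleep` stands in for mongod):

```java
import java.util.concurrent.TimeUnit;

public class GracefulStop {
    public static void main(String[] args) throws Exception {
        // Stand-in for the mongod child process.
        Process p = new ProcessBuilder("sleep", "30").start();

        p.destroy();                              // polite termination request first
        if (!p.waitFor(2, TimeUnit.SECONDS)) {    // the grace period we want configurable
            p.destroyForcibly();                  // escalate only if it ignored us
        }
        p.waitFor();                              // reap the child
        System.out.println("stopped");
    }
}
```

With a grace period sized to mongod's shutdown time, the "Couldn't kill mongod process!" escalation above would not be reached in the common case.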

  was:
Mongo tests finish with following exception intermittently. It happens because  
[timeout 
value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
 from   de.flapdoodle.embed.process library is too low for the mongod process 
to be stopped gracefully. 

I have create 
[issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64] 
to suggest making timeout configurable. For now as temporary solution we will 
log mongod exception instead of throwing it.

{noformat}
[mongod output] Exception in thread "Thread-25" 
java.lang.IllegalStateException: Couldn't kill mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 

[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5895:

Reviewer: Arina Ielchiieva

> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Mongo tests finish with the following exception intermittently. It happens 
> because the [timeout 
> value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
>  from the de.flapdoodle.embed.process library is too low for the mongod process 
> to be stopped gracefully. 
> I have created an 
> [issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64]
>  to suggest making the timeout configurable. For now, as a temporary solution, 
> we will log the mongod exception instead of throwing it.
> {noformat}
> [mongod output] Exception in thread "Thread-25" 
> java.lang.IllegalStateException: Couldn't kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't 
> kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Results :
> Tests in error: 
>   MongoTestSuit.tearDownCluster:260 » IllegalState Couldn't kill mongod 
> process!...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5895:

Labels: ready-to-commit  (was: )

> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
>  Labels: ready-to-commit
> Fix For: 1.12.0
>
>
> Mongo tests finish with the following exception intermittently. It happens 
> because the [timeout 
> value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
>  from the de.flapdoodle.embed.process library is too low for the mongod process 
> to be stopped gracefully. 
> I have created an 
> [issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64]
>  to suggest making the timeout configurable. For now, as a temporary solution, 
> we will log the mongod exception instead of throwing it.
> {noformat}
> [mongod output] Exception in thread "Thread-25" 
> java.lang.IllegalStateException: Couldn't kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't 
> kill mongod process!
> 
> Something bad happend. We couldn't kill mongod process, and tried a lot.
> If you want this problem solved you can help us if you open a new issue.
> Follow this link:
> https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues
> Thank you:)
> 
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
>   at 
> de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
>   at 
> de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
>   at 
> de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
>   at 
> de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
>   at 
> de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
>   at java.lang.Thread.run(Thread.java:745)
> Results :
> Tests in error: 
>   MongoTestSuit.tearDownCluster:260 » IllegalState Couldn't kill mongod 
> process!...
> {noformat}
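The temporary workaround described above (log the shutdown exception instead of letting it fail the suite teardown) might look roughly like the following minimal sketch. This is not the actual DRILL-5895 patch; the class and method names (`QuietMongodShutdown`, `stopQuietly`) are hypothetical, and a `Runnable` stands in for the flapdoodle `MongodExecutable.stop()` call.

```java
// Sketch (assumed, not the actual DRILL-5895 patch): catch the
// IllegalStateException that flapdoodle throws when mongod cannot be
// killed within its timeout, and log it instead of propagating it.
import java.util.logging.Logger;

public class QuietMongodShutdown {
    private static final Logger logger = Logger.getLogger("MongoTestSuit");

    /**
     * Runs the given stop action. Returns true on a clean stop; on the
     * "Couldn't kill mongod process!" IllegalStateException it logs a
     * warning and returns false instead of rethrowing.
     */
    public static boolean stopQuietly(Runnable stopAction) {
        try {
            stopAction.run();
            return true;
        } catch (IllegalStateException e) {
            logger.warning("Failed to stop mongod gracefully: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // Simulate the flapdoodle timeout failure seen in the stack traces above.
        boolean ok = stopQuietly(() -> {
            throw new IllegalStateException("Couldn't kill mongod process!");
        });
        System.out.println(ok ? "stopped" : "logged failure"); // prints "logged failure"
    }
}
```

With this shape, an intermittent slow mongod shutdown produces a warning in the test log rather than a `MongoTestSuit.tearDownCluster` error.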





[jira] [Commented] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214940#comment-16214940
 ] 

ASF GitHub Bot commented on DRILL-5895:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/1006
  
+1. LGTM.


> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
> Fix For: 1.12.0
>
>





[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5895:
---
Fix Version/s: (was: Future)
   1.12.0

> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
> Fix For: 1.12.0
>
>





[jira] [Commented] (DRILL-5887) Display process user/ groups in Drill UI

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214925#comment-16214925
 ] 

ASF GitHub Bot commented on DRILL-5887:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/998#discussion_r146222565
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRoot.java 
---
@@ -85,19 +86,24 @@ public ClusterInfo getClusterInfoJSON() {
 // For all other cases the user info need-not or should-not be 
displayed
 OptionManager optionManager = work.getContext().getOptionManager();
 final boolean isUserLoggedIn = AuthDynamicFeature.isUserLoggedIn(sc);
-String adminUsers = isUserLoggedIn ?
-
ExecConstants.ADMIN_USERS_VALIDATOR.getAdminUsers(optionManager) : null;
-String adminUserGroups = isUserLoggedIn ?
-
ExecConstants.ADMIN_USER_GROUPS_VALIDATOR.getAdminUserGroups(optionManager) : 
null;
+final String processUser = ImpersonationUtil.getProcessUserName();
+final String processUserGroups = Joiner.on(", 
").join(ImpersonationUtil.getProcessUserGroupNames());
+String adminUsers = 
ExecConstants.ADMIN_USERS_VALIDATOR.getAdminUsers(optionManager);
+String adminUserGroups = 
ExecConstants.ADMIN_USER_GROUPS_VALIDATOR.getAdminUserGroups(optionManager);
 
 // separate groups by comma + space
-if (adminUsers != null) {
+if (adminUsers.length() == 0) {
--- End diff --

This adheres to the MVC pattern, which separates the model from the view: the 
model is generated on the server side, while FreeMarker is responsible for the 
presentation layer.


> Display process user/ groups in Drill UI
> 
>
> Key: DRILL-5887
> URL: https://issues.apache.org/jira/browse/DRILL-5887
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - HTTP
>Affects Versions: 1.11.0
>Reporter: Prasad Nagaraj Subramanya
>Assignee: Prasad Nagaraj Subramanya
>Priority: Minor
> Fix For: 1.12.0
>
>
> The Drill UI only lists the admin users/groups specified as options.
> We should also display the process user/groups, which have admin privileges.
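The diff discussed above joins the process user's group names with ", " (via Guava's `Joiner`) and checks whether the admin-users option is empty. A minimal stdlib-only sketch of that rendering (the helper name `renderGroups` and the fallback label are assumptions, not the actual DRILL-5887 code):

```java
// Sketch of the comma-separated group rendering from the PR diff above.
// The real patch uses Guava's Joiner.on(", "); String.join is the
// stdlib equivalent. The "(none)" fallback mirrors the
// adminUsers.length() == 0 check discussed in the review.
import java.util.Arrays;
import java.util.List;

public class ProcessUserInfo {
    /** Joins group names with ", ", or returns a placeholder when empty. */
    public static String renderGroups(List<String> groups) {
        String joined = String.join(", ", groups);
        return joined.isEmpty() ? "(none)" : joined;
    }

    public static void main(String[] args) {
        System.out.println(renderGroups(Arrays.asList("drill", "hadoop"))); // prints "drill, hadoop"
    }
}
```

Keeping this string assembly on the server side, rather than in the FreeMarker template, is the MVC separation the reviewer refers to.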





[jira] [Commented] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214870#comment-16214870
 ] 

ASF GitHub Bot commented on DRILL-5895:
---

GitHub user vladimirtkach opened a pull request:

https://github.com/apache/drill/pull/1006

DRILL-5895: Add logging mongod exception when failed to close all mongod 
processes during provided timeout

Details in [DRILL-5895](https://issues.apache.org/jira/browse/DRILL-5895).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vladimirtkach/drill DRILL-5895

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/1006.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1006


commit c5c0a6b3a54649be1bd8e29617d54e66dd51190b
Author: Volodymyr Tkach 
Date:   2017-10-23T09:10:57Z

DRILL-5895: Add logging mongod exception when failed to close all mongod 
processes during provided timeout




> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
> Fix For: Future
>
>





[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5895:
---
Description: 
Mongo tests intermittently finish with the following exception. It happens because the 
[timeout 
value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
 from the de.flapdoodle.embed.process library is too low for the mongod process 
to be stopped gracefully. 

I have created an 
[issue|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/issues/64] 
to suggest making the timeout configurable. For now, as a temporary solution, we will 
log the mongod exception instead of throwing it.

{noformat}
[mongod output] Exception in thread "Thread-25" 
java.lang.IllegalStateException: Couldn't kill mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 
de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't kill 
mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 
de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
at java.lang.Thread.run(Thread.java:745)

Results :

Tests in error: 
  MongoTestSuit.tearDownCluster:260 » IllegalState Couldn't kill mongod 
process!...

{noformat}

  was:
Mongo tests finish with following exception intermittently. It happens because  
[timeout 
value|https://github.com/flapdoodle-oss/de.flapdoodle.embed.process/blob/1.7/src/main/java/de/flapdoodle/embed/process/runtime/ProcessControl.java#L132]
 from   de.flapdoodle.embed.process library is too low for the mongod process 
to be stopped gracefully. 

{noformat}
[mongod output] Exception in thread "Thread-25" 
java.lang.IllegalStateException: Couldn't kill mongod process!


Something bad happend. We couldn't kill mongod process, and tried a lot.
If you want this problem solved you can help us if you open a new issue.

Follow this link:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues

Thank you:)



at 
de.flapdoodle.embed.process.runtime.ProcessControl.waitForProcessGotKilled(ProcessControl.java:192)
at 
de.flapdoodle.embed.process.runtime.ProcessControl.stop(ProcessControl.java:76)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stopProcess(AbstractProcess.java:189)
at 
de.flapdoodle.embed.mongo.AbstractMongoProcess.stopInternal(AbstractMongoProcess.java:117)
at 
de.flapdoodle.embed.process.runtime.AbstractProcess.stop(AbstractProcess.java:170)
at 
de.flapdoodle.embed.process.runtime.Executable.stop(Executable.java:73)
at 
de.flapdoodle.embed.process.runtime.Executable$JobKiller.run(Executable.java:90)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "Thread-13" java.lang.IllegalStateException: Couldn't kill 
mongod process!


[jira] [Updated] (DRILL-5895) Fix unit tests for mongo storage plugin

2017-10-23 Thread Volodymyr Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Tkach updated DRILL-5895:
---
Affects Version/s: 1.11.0

> Fix unit tests for mongo storage plugin
> ---
>
> Key: DRILL-5895
> URL: https://issues.apache.org/jira/browse/DRILL-5895
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - MongoDB
>Affects Versions: 1.11.0
>Reporter: Volodymyr Tkach
>Assignee: Volodymyr Tkach
> Fix For: Future
>
>


