[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs

2019-10-24 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Reviewer: Arina Ielchiieva

> Add user logged in/out event in info level logs
> ---
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Security
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>
> Sample output logs:
> WebUser:
> Note: for WebUser log-in/out events the port may differ between the two 
> events, since the Web-based connection is stateless.
> {code:java}
> 2019-10-22 13:47:24,888 [qtp480678786-70] INFO 
> o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 
> 172.30.8.49:60558
> 2019-10-22 13:47:30,508 [qtp480678786-64] INFO 
> o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 
> 172.30.8.49:60567{code}
> JDBC/ODBC:
> {code:java}
> 2019-10-22 13:48:16,977 [UserServer-1] INFO 
> o.a.drill.exec.rpc.user.UserServer - User alice logged in from 
> /10.10.100.163:59846
> 2019-10-22 13:48:19,858 [UserServer-1] INFO 
> o.a.drill.exec.rpc.user.UserServer - User alice logged out from 
> /10.10.100.163:59846{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs

2019-10-24 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Description: 
Sample output logs:
WebUser:

Note: for WebUser log-in/out events the port may differ between the two 
events, since the Web-based connection is stateless.
{code:java}
2019-10-22 13:47:24,888 [qtp480678786-70] INFO 
o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 
172.30.8.49:60558
2019-10-22 13:47:30,508 [qtp480678786-64] INFO 
o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 
172.30.8.49:60567{code}
JDBC/ODBC:
{code:java}
2019-10-22 13:48:16,977 [UserServer-1] INFO o.a.drill.exec.rpc.user.UserServer 
- User alice logged in from /10.10.100.163:59846
2019-10-22 13:48:19,858 [UserServer-1] INFO o.a.drill.exec.rpc.user.UserServer 
- User alice logged out from /10.10.100.163:59846{code}
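A minimal sketch of how such INFO-level session events could be emitted; this is illustrative only, and the class and method names are hypothetical, not Drill's actual implementation:

```java
import java.util.logging.Logger;

// Illustrative sketch (not Drill's code): building and logging the
// session events shown in the sample output above.
public class SessionEventLog {
    private static final Logger LOG = Logger.getLogger(SessionEventLog.class.getName());

    // Builds an event message such as "WebUser alice logged in from 172.30.8.49:60558"
    static String sessionEvent(String userKind, String user, boolean loggedIn, String remote) {
        return String.format("%s %s logged %s from %s",
                userKind, user, loggedIn ? "in" : "out", remote);
    }

    public static void main(String[] args) {
        LOG.info(sessionEvent("WebUser", "alice", true, "172.30.8.49:60558"));
        LOG.info(sessionEvent("User", "alice", false, "/10.10.100.163:59846"));
    }
}
```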

> Add user logged in/out event in info level logs
> ---
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Security
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>
> Sample output logs:
> WebUser:
> Note: for WebUser log-in/out events the port may differ between the two 
> events, since the Web-based connection is stateless.
> {code:java}
> 2019-10-22 13:47:24,888 [qtp480678786-70] INFO 
> o.a.d.e.s.r.a.DrillRestLoginService - WebUser alice logged in from 
> 172.30.8.49:60558
> 2019-10-22 13:47:30,508 [qtp480678786-64] INFO 
> o.a.d.e.s.rest.LogInLogOutResources - WebUser alice logged out from 
> 172.30.8.49:60567{code}
> JDBC/ODBC:
> {code:java}
> 2019-10-22 13:48:16,977 [UserServer-1] INFO 
> o.a.drill.exec.rpc.user.UserServer - User alice logged in from 
> /10.10.100.163:59846
> 2019-10-22 13:48:19,858 [UserServer-1] INFO 
> o.a.drill.exec.rpc.user.UserServer - User alice logged out from 
> /10.10.100.163:59846{code}





[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs

2019-10-22 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Component/s: Security

> Add user logged in/out event in info level logs
> ---
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Security
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Updated] (DRILL-7417) Add user logged in/out event in info level logs

2019-10-22 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Summary: Add user logged in/out event in info level logs  (was: Test Task)

> Add user logged in/out event in info level logs
> ---
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Reopened] (DRILL-7417) Test Task

2019-10-22 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia reopened DRILL-7417:
--
  Assignee: Sorabh Hamirwasia

> Test Task
> -
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Updated] (DRILL-7417) Test Task

2019-10-22 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Issue Type: Improvement  (was: Task)

> Test Task
> -
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Closed] (DRILL-7417) Test Task

2019-10-21 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia closed DRILL-7417.

Resolution: Invalid

> Test Task
> -
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Created] (DRILL-7417) Test Task

2019-10-21 Thread Sorabh Hamirwasia (Jira)
Sorabh Hamirwasia created DRILL-7417:


 Summary: Test Task
 Key: DRILL-7417
 URL: https://issues.apache.org/jira/browse/DRILL-7417
 Project: Apache Drill
  Issue Type: Task
Reporter: Sorabh Hamirwasia








[jira] [Updated] (DRILL-7417) Test Task

2019-10-21 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Attachment: Test.rtf

> Test Task
> -
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Updated] (DRILL-7417) Test Task

2019-10-21 Thread Sorabh Hamirwasia (Jira)


 [ 
https://issues.apache.org/jira/browse/DRILL-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7417:
-
Attachment: (was: Test.rtf)

> Test Task
> -
>
> Key: DRILL-7417
> URL: https://issues.apache.org/jira/browse/DRILL-7417
> Project: Apache Drill
>  Issue Type: Task
>Reporter: Sorabh Hamirwasia
>Priority: Major
>






[jira] [Created] (DRILL-7411) DRILL 1.16

2019-10-18 Thread Sorabh Hamirwasia (Jira)
Sorabh Hamirwasia created DRILL-7411:


 Summary: DRILL 1.16
 Key: DRILL-7411
 URL: https://issues.apache.org/jira/browse/DRILL-7411
 Project: Apache Drill
  Issue Type: Sub-task
Reporter: Sorabh Hamirwasia


Design documents for the following features are added in this JIRA:

 





[jira] [Created] (DRILL-7410) Design Documents

2019-10-18 Thread Sorabh Hamirwasia (Jira)
Sorabh Hamirwasia created DRILL-7410:


 Summary: Design Documents
 Key: DRILL-7410
 URL: https://issues.apache.org/jira/browse/DRILL-7410
 Project: Apache Drill
  Issue Type: Task
Reporter: Sorabh Hamirwasia


This Jira is created to track the design documents available for all the 
features developed in Apache Drill. It serves as an index for easy access to 
these documents for future reference.





[jira] [Updated] (DRILL-7164) KafkaFilterPushdownTest is sometimes failing to pattern match correctly.

2019-05-07 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7164:
-
Reviewer: Hanumath Rao M

> KafkaFilterPushdownTest is sometimes failing to pattern match correctly.
> 
>
> Key: DRILL-7164
> URL: https://issues.apache.org/jira/browse/DRILL-7164
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Kafka
>Affects Versions: 1.16.0
>Reporter: Hanumath Rao Maduri
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> On my private build I am hitting a Kafka storage test failure 
> intermittently. Here is the issue I came across.
> {code}
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
> 15:01:39.852 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -292 B(75.4 KiB), h: -391.1 MiB(240.7 MiB), nh: 824.5 KiB(129.0 MiB)): 
> testPushdownOffsetOneRecordReturnedWithBoundaryConditions(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 5000,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:56524",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`", "`kafkaMsgOffset`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "project",
> "@id" : 5,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`**`"
> }, {
>   "ref" : "`kafkaMsgOffset`",
>   "expr" : "`kafkaMsgOffset`"
> } ],
> "child" : 6,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "filter",
> "@id" : 4,
> "child" : 5,
> "expr" : "equal(`kafkaMsgOffset`, 9) ",
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 0.75
> }
>   }, {
> "pop" : "selection-vector-remover",
> "@id" : 3,
> "child" : 4,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 2,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 3,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 1,
> "exprs" : [ {
>   "ref" : "`**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 2,
> "outputProj" : true,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "screen",
> "@id" : 0,
> "child" : 1,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   } ]
> }!
> {code}
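The assertion above does an exact-substring match of a pretty-printed JSON fragment against the serialized plan, and whitespace sensitivity in that match is one plausible source of the intermittent failures. A sketch of a whitespace-normalizing matcher; this is a hypothetical helper, not the actual test-harness code:

```java
// Illustrative sketch: make plan-fragment matching robust against
// whitespace differences by collapsing all whitespace runs before comparing.
public class PlanMatcher {
    static String normalize(String s) {
        // Collapse any run of whitespace (including newlines) to one space
        return s.replaceAll("\\s+", " ").trim();
    }

    static boolean planContains(String plan, String expected) {
        return normalize(plan).contains(normalize(expected));
    }
}
```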

[jira] [Updated] (DRILL-7164) KafkaFilterPushdownTest is sometimes failing to pattern match correctly.

2019-05-07 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7164:
-
Labels: ready-to-commit  (was: )

> KafkaFilterPushdownTest is sometimes failing to pattern match correctly.
> 
>
> Key: DRILL-7164
> URL: https://issues.apache.org/jira/browse/DRILL-7164
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Kafka
>Affects Versions: 1.16.0
>Reporter: Hanumath Rao Maduri
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> On my private build I am hitting a Kafka storage test failure 
> intermittently. Here is the issue I came across.
> {code}
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
> 15:01:39.852 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -292 B(75.4 KiB), h: -391.1 MiB(240.7 MiB), nh: 824.5 KiB(129.0 MiB)): 
> testPushdownOffsetOneRecordReturnedWithBoundaryConditions(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 5000,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:56524",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`", "`kafkaMsgOffset`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "project",
> "@id" : 5,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`**`"
> }, {
>   "ref" : "`kafkaMsgOffset`",
>   "expr" : "`kafkaMsgOffset`"
> } ],
> "child" : 6,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "filter",
> "@id" : 4,
> "child" : 5,
> "expr" : "equal(`kafkaMsgOffset`, 9) ",
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 0.75
> }
>   }, {
> "pop" : "selection-vector-remover",
> "@id" : 3,
> "child" : 4,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 2,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 3,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 1,
> "exprs" : [ {
>   "ref" : "`**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 2,
> "outputProj" : true,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "screen",
> "@id" : 0,
> "child" : 1,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   } ]
> }!

[jira] [Updated] (DRILL-7199) Optimize the time taken to populate column statistics for non-interesting columns

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7199:
-
Component/s: Metadata

> Optimize the time taken to populate column statistics for non-interesting 
> columns
> -
>
> Key: DRILL-7199
> URL: https://issues.apache.org/jira/browse/DRILL-7199
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently, populating column statistics for non-interesting columns takes 
> very long, since the statistics are populated for every row group. Since 
> non-interesting column statistics are common to the whole table, they can 
> be populated once and reused.
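The proposed optimization amounts to per-table memoization: compute the non-interesting column statistics once and reuse the result for every row group. A minimal sketch, with hypothetical names rather than Drill's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch (not Drill's code): cache the table-wide statistics
// for non-interesting columns so the expensive computation runs once,
// not once per row group.
public class NonInterestingStatsCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    Object statsFor(String tablePath, Supplier<Object> compute) {
        // computeIfAbsent runs the computation only on first access
        return cache.computeIfAbsent(tablePath, k -> compute.get());
    }
}
```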



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7171) Count(*) query on leaf level directory is not reading summary cache file.

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7171:
-
Affects Version/s: 1.16.0

> Count(*) query on leaf level directory is not reading summary cache file.
> -
>
> Key: DRILL-7171
> URL: https://issues.apache.org/jira/browse/DRILL-7171
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Since the leaf-level directory doesn't store the metadata directories file, 
> reading the summary assumes the cache is possibly corrupt whenever the 
> directories cache file is absent, and skips reading the summary cache file. 
> The metadata directories cache file should therefore also be created at the 
> leaf level.
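The validity check described above can be sketched as pure logic, with the leaf level exempted from the directories-file requirement; this is purely illustrative, and the names are hypothetical, not Drill's internals:

```java
// Illustrative sketch (not Drill's code): decide whether a summary cache
// can be trusted. The described bug: leaf-level directories never have a
// directories cache file, so its absence at the leaf level should not be
// treated as corruption.
public class SummaryCacheCheck {
    static boolean summaryUsable(boolean summaryFileExists,
                                 boolean directoriesFileExists,
                                 boolean isLeafDirectory) {
        return summaryFileExists && (directoriesFileExists || isLeafDirectory);
    }
}
```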





[jira] [Updated] (DRILL-7171) Count(*) query on leaf level directory is not reading summary cache file.

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7171:
-
Component/s: Metadata

> Count(*) query on leaf level directory is not reading summary cache file.
> -
>
> Key: DRILL-7171
> URL: https://issues.apache.org/jira/browse/DRILL-7171
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.16.0
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Since the leaf-level directory doesn't store the metadata directories file, 
> reading the summary assumes the cache is possibly corrupt whenever the 
> directories cache file is absent, and skips reading the summary cache file. 
> The metadata directories cache file should therefore also be created at the 
> leaf level.





[jira] [Updated] (DRILL-7225) Merging of columnTypeInfo for file with different schema throws NullPointerException during refresh metadata

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7225:
-
Affects Version/s: 1.16.0

> Merging of columnTypeInfo for file with different schema throws 
> NullPointerException during refresh metadata
> 
>
> Key: DRILL-7225
> URL: https://issues.apache.org/jira/browse/DRILL-7225
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> Merging of columnTypeInfo from two files with different schemas throws a 
> NullPointerException. For example, if a directory Orders has two files:
>  * orders.parquet (with columns order_id, order_name, order_date)
>  * orders_with_address.parquet (with columns order_id, order_name, address)
> When refresh table metadata is triggered, metadata such as total_null_count 
> for columns in both files is aggregated and updated in the ColumnTypeInfo. 
> Initially, ColumnTypeInfo is initialized with the first file's columns 
> (i.e., order_id, order_name, order_date). While aggregating, the existing 
> ColumnTypeInfo is looked up for the columns of the second file, and since 
> some of them don't exist in the ColumnTypeInfo, an NPE is thrown. This can 
> be fixed by initializing ColumnTypeInfo entries for columns that are not 
> yet present.
>  
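The described fix can be sketched with Java's Map.merge(), which creates an absent entry instead of dereferencing it; the names here are illustrative, not Drill's actual ColumnTypeInfo API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not Drill's code): when aggregating per-file column
// metadata, initialize an entry for columns not yet present instead of
// dereferencing a missing one (the cause of the NullPointerException).
public class ColumnStatsMerger {
    // column name -> aggregated null count across files
    final Map<String, Long> totalNullCounts = new HashMap<>();

    void mergeFile(Map<String, Long> fileNullCounts) {
        for (Map.Entry<String, Long> e : fileNullCounts.entrySet()) {
            // merge() creates the entry if absent, otherwise sums the counts
            totalNullCounts.merge(e.getKey(), e.getValue(), Long::sum);
        }
    }
}
```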





[jira] [Updated] (DRILL-7225) Merging of columnTypeInfo for file with different schema throws NullPointerException during refresh metadata

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7225:
-
Component/s: Metadata

> Merging of columnTypeInfo for file with different schema throws 
> NullPointerException during refresh metadata
> 
>
> Key: DRILL-7225
> URL: https://issues.apache.org/jira/browse/DRILL-7225
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.16.0
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> Merging of columnTypeInfo from two files with different schemas throws a 
> NullPointerException. For example, if a directory Orders has two files:
>  * orders.parquet (with columns order_id, order_name, order_date)
>  * orders_with_address.parquet (with columns order_id, order_name, address)
> When refresh table metadata is triggered, metadata such as total_null_count 
> for columns in both files is aggregated and updated in the ColumnTypeInfo. 
> Initially, ColumnTypeInfo is initialized with the first file's columns 
> (i.e., order_id, order_name, order_date). While aggregating, the existing 
> ColumnTypeInfo is looked up for the columns of the second file, and since 
> some of them don't exist in the ColumnTypeInfo, an NPE is thrown. This can 
> be fixed by initializing ColumnTypeInfo entries for columns that are not 
> yet present.
>  





[jira] [Commented] (DRILL-7148) TPCH query 17 increases execution time with Statistics enabled because join order is changed

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832850#comment-16832850
 ] 

Sorabh Hamirwasia commented on DRILL-7148:
--

[~gparai] - With this commit I am seeing plan verification failures for 2 
queries in [Functional 
Tests|http://10.10.104.91:8080/job/Apache_Drill_Functional_Tests/1823/console]. 
After removing this commit from my merge branch, the tests pass. So for now 
I am removing the ready-to-commit tag from this JIRA. Can you please look 
into the failures in the meantime?

> TPCH query 17 increases execution time with Statistics enabled because join 
> order is changed
> 
>
> Key: DRILL-7148
> URL: https://issues.apache.org/jira/browse/DRILL-7148
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Gautam Parai
>Assignee: Gautam Parai
>Priority: Major
> Fix For: 1.17.0
>
>
> TPCH query 17 with sf 1000 runs 45% slower. One issue is that the join order 
> has flipped the build side and the probe side in Major Fragment 01.
> Here is the query:
> select
>  sum(l.l_extendedprice) / 7.0 as avg_yearly
> from
>  lineitem l,
>  part p
> where
>  p.p_partkey = l.l_partkey
>  and p.p_brand = 'Brand#13'
>  and p.p_container = 'JUMBO CAN'
>  and l.l_quantity < (
>  select
>  0.2 * avg(l2.l_quantity)
>  from
>  lineitem l2
>  where
>  l2.l_partkey = p.p_partkey
>  );
> Here is original plan:
> {noformat}
> 00-00 Screen : rowType = RecordType(ANY avg_yearly): rowcount = 1.0, 
> cumulative cost = \{7.853786601428E10 rows, 6.6179786770537E11 cpu, 
> 3.0599948545E10 io, 1.083019457355776E14 network, 1.17294998955024E11 
> memory}, id = 489493
> 00-01 Project(avg_yearly=[/($0, 7.0)]) : rowType = RecordType(ANY 
> avg_yearly): rowcount = 1.0, cumulative cost = \{7.853786601418E10 rows, 
> 6.6179786770527E11 cpu, 3.0599948545E10 io, 1.083019457355776E14 network, 
> 1.17294998955024E11 memory}, id = 489492
> 00-02 StreamAgg(group=[{}], agg#0=[SUM($0)]) : rowType = RecordType(ANY $f0): 
> rowcount = 1.0, cumulative cost = \{7.853786601318E10 rows, 
> 6.6179786770127E11 cpu, 3.0599948545E10 io, 1.083019457355776E14 network, 
> 1.17294998955024E11 memory}, id = 489491
> 00-03 UnionExchange : rowType = RecordType(ANY $f0): rowcount = 1.0, 
> cumulative cost = \{7.853786601218E10 rows, 6.6179786768927E11 cpu, 
> 3.0599948545E10 io, 1.083019457355776E14 network, 1.17294998955024E11 
> memory}, id = 489490
> 01-01 StreamAgg(group=[{}], agg#0=[SUM($0)]) : rowType = RecordType(ANY $f0): 
> rowcount = 1.0, cumulative cost = \{7.853786601118E10 rows, 
> 6.6179786768127E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489489
> 01-02 Project(l_extendedprice=[$1]) : rowType = RecordType(ANY 
> l_extendedprice): rowcount = 2.948545E9, cumulative cost = 
> \{7.553787115668E10 rows, 6.2579792942727E11 cpu, 3.0599948545E10 io, 
> 1.083019457314816E14 network, 1.17294998955024E11 memory}, id = 489488
> 01-03 SelectionVectorRemover : rowType = RecordType(ANY l_quantity, ANY 
> l_extendedprice, ANY p_partkey, ANY l_partkey, ANY $f1): rowcount = 
> 2.948545E9, cumulative cost = \{7.253787630218E10 rows, 
> 6.2279793457277E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489487
> 01-04 Filter(condition=[<($0, *(0.2, $4))]) : rowType = RecordType(ANY 
> l_quantity, ANY l_extendedprice, ANY p_partkey, ANY l_partkey, ANY $f1): 
> rowcount = 2.948545E9, cumulative cost = \{6.953788144768E10 rows, 
> 6.1979793971827E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489486
> 01-05 HashJoin(condition=[=($2, $3)], joinType=[inner], semi-join: =[false]) 
> : rowType = RecordType(ANY l_quantity, ANY l_extendedprice, ANY p_partkey, 
> ANY l_partkey, ANY $f1): rowcount = 5.89709E9, cumulative cost = 
> \{6.353789173867999E10 rows, 5.8379800146427E11 cpu, 3.0599948545E10 io, 
> 1.083019457314816E14 network, 1.17294998955024E11 memory}, id = 489485
> 01-07 Project(l_quantity=[$0], l_extendedprice=[$1], p_partkey=[$2]) : 
> rowType = RecordType(ANY l_quantity, ANY l_extendedprice, ANY p_partkey): 
> rowcount = 5.89709E9, cumulative cost = \{4.2417927963E10 rows, 
> 2.71618536905E11 cpu, 1.8599969127E10 io, 9.8471562592256E13 network, 7.92E7 
> memory}, id = 489476
> 01-09 HashToRandomExchange(dist0=[[$2]]) : rowType = RecordType(ANY 
> l_quantity, ANY l_extendedprice, ANY p_partkey, ANY 
> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 5.89709E9, cumulative cost = 
> \{3.6417938254E10 rows, 2.53618567778E11 cpu, 1.8599969127E10 io, 
> 9.8471562592256E13 network, 7.92E7 memory}, id = 489475
> 02-01 UnorderedMuxExchange : rowType = RecordType(ANY l_quantity, ANY 
> 

[jira] [Updated] (DRILL-7148) TPCH query 17 increases execution time with Statistics enabled because join order is changed

2019-05-03 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7148:
-
Labels:   (was: ready-to-commit)

> TPCH query 17 increases execution time with Statistics enabled because join 
> order is changed
> 
>
> Key: DRILL-7148
> URL: https://issues.apache.org/jira/browse/DRILL-7148
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Gautam Parai
>Assignee: Gautam Parai
>Priority: Major
> Fix For: 1.17.0
>
>
> TPCH query 17 with sf 1000 runs 45% slower. One issue is that the join order 
> has flipped the build side and the probe side in Major Fragment 01.
> Here is the query:
> select
>  sum(l.l_extendedprice) / 7.0 as avg_yearly
> from
>  lineitem l,
>  part p
> where
>  p.p_partkey = l.l_partkey
>  and p.p_brand = 'Brand#13'
>  and p.p_container = 'JUMBO CAN'
>  and l.l_quantity < (
>  select
>  0.2 * avg(l2.l_quantity)
>  from
>  lineitem l2
>  where
>  l2.l_partkey = p.p_partkey
>  );
> Here is original plan:
> {noformat}
> 00-00 Screen : rowType = RecordType(ANY avg_yearly): rowcount = 1.0, 
> cumulative cost = \{7.853786601428E10 rows, 6.6179786770537E11 cpu, 
> 3.0599948545E10 io, 1.083019457355776E14 network, 1.17294998955024E11 
> memory}, id = 489493
> 00-01 Project(avg_yearly=[/($0, 7.0)]) : rowType = RecordType(ANY 
> avg_yearly): rowcount = 1.0, cumulative cost = \{7.853786601418E10 rows, 
> 6.6179786770527E11 cpu, 3.0599948545E10 io, 1.083019457355776E14 network, 
> 1.17294998955024E11 memory}, id = 489492
> 00-02 StreamAgg(group=[{}], agg#0=[SUM($0)]) : rowType = RecordType(ANY $f0): 
> rowcount = 1.0, cumulative cost = \{7.853786601318E10 rows, 
> 6.6179786770127E11 cpu, 3.0599948545E10 io, 1.083019457355776E14 network, 
> 1.17294998955024E11 memory}, id = 489491
> 00-03 UnionExchange : rowType = RecordType(ANY $f0): rowcount = 1.0, 
> cumulative cost = \{7.853786601218E10 rows, 6.6179786768927E11 cpu, 
> 3.0599948545E10 io, 1.083019457355776E14 network, 1.17294998955024E11 
> memory}, id = 489490
> 01-01 StreamAgg(group=[{}], agg#0=[SUM($0)]) : rowType = RecordType(ANY $f0): 
> rowcount = 1.0, cumulative cost = \{7.853786601118E10 rows, 
> 6.6179786768127E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489489
> 01-02 Project(l_extendedprice=[$1]) : rowType = RecordType(ANY 
> l_extendedprice): rowcount = 2.948545E9, cumulative cost = 
> \{7.553787115668E10 rows, 6.2579792942727E11 cpu, 3.0599948545E10 io, 
> 1.083019457314816E14 network, 1.17294998955024E11 memory}, id = 489488
> 01-03 SelectionVectorRemover : rowType = RecordType(ANY l_quantity, ANY 
> l_extendedprice, ANY p_partkey, ANY l_partkey, ANY $f1): rowcount = 
> 2.948545E9, cumulative cost = \{7.253787630218E10 rows, 
> 6.2279793457277E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489487
> 01-04 Filter(condition=[<($0, *(0.2, $4))]) : rowType = RecordType(ANY 
> l_quantity, ANY l_extendedprice, ANY p_partkey, ANY l_partkey, ANY $f1): 
> rowcount = 2.948545E9, cumulative cost = \{6.953788144768E10 rows, 
> 6.1979793971827E11 cpu, 3.0599948545E10 io, 1.083019457314816E14 network, 
> 1.17294998955024E11 memory}, id = 489486
> 01-05 HashJoin(condition=[=($2, $3)], joinType=[inner], semi-join: =[false]) 
> : rowType = RecordType(ANY l_quantity, ANY l_extendedprice, ANY p_partkey, 
> ANY l_partkey, ANY $f1): rowcount = 5.89709E9, cumulative cost = 
> \{6.353789173867999E10 rows, 5.8379800146427E11 cpu, 3.0599948545E10 io, 
> 1.083019457314816E14 network, 1.17294998955024E11 memory}, id = 489485
> 01-07 Project(l_quantity=[$0], l_extendedprice=[$1], p_partkey=[$2]) : 
> rowType = RecordType(ANY l_quantity, ANY l_extendedprice, ANY p_partkey): 
> rowcount = 5.89709E9, cumulative cost = \{4.2417927963E10 rows, 
> 2.71618536905E11 cpu, 1.8599969127E10 io, 9.8471562592256E13 network, 7.92E7 
> memory}, id = 489476
> 01-09 HashToRandomExchange(dist0=[[$2]]) : rowType = RecordType(ANY 
> l_quantity, ANY l_extendedprice, ANY p_partkey, ANY 
> E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 5.89709E9, cumulative cost = 
> \{3.6417938254E10 rows, 2.53618567778E11 cpu, 1.8599969127E10 io, 
> 9.8471562592256E13 network, 7.92E7 memory}, id = 489475
> 02-01 UnorderedMuxExchange : rowType = RecordType(ANY l_quantity, ANY 
> l_extendedprice, ANY p_partkey, ANY E_X_P_R_H_A_S_H_F_I_E_L_D): rowcount = 
> 5.89709E9, cumulative cost = \{3.0417948545E10 rows, 1.57618732434E11 
> cpu, 1.8599969127E10 io, 1.677312E11 network, 7.92E7 memory}, id = 489474
> 04-01 Project(l_quantity=[$0], l_extendedprice=[$1], p_partkey=[$2], 
> E_X_P_R_H_A_S_H_F_I_E_L_D=[hash32AsDouble($2, 1301011)]) : rowType 

[jira] [Assigned] (DRILL-416) Make Drill work with SELECT without FROM

2019-05-02 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia reassigned DRILL-416:
---

Assignee: Volodymyr Vysotskyi

> Make Drill work with SELECT without FROM
> 
>
> Key: DRILL-416
> URL: https://issues.apache.org/jira/browse/DRILL-416
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Chun Chang
>Assignee: Volodymyr Vysotskyi
>Priority: Major
> Fix For: 1.16.0
>
>
> This works with postgres:
> [root@qa-node120 ~]# sudo -u postgres psql foodmart
> foodmart=# select 1+1.1;
>  ?column?
> --
>   2.1
> (1 row)
> But does not work with Drill:
> 0: jdbc:drill:> select 1+1.1;
> Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while 
> running query.[error_id: "100f4d4c-1ee1-495e-9c2f-547aae75473d"
> endpoint {
>   address: "qa-node118.qa.lab"
>   user_port: 31010
>   control_port: 31011
>   data_port: 31012
> }
> error_type: 0
> message: "Failure while parsing sql. < SqlParseException:[ Encountered 
> \"\" at line 1, column 12.\nWas expecting one of:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-912) Project push down tests rely on JSON pushdown but JSON reader no longer supports pushdown.

2019-05-02 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia reassigned DRILL-912:
---

Assignee: Volodymyr Vysotskyi

> Project push down tests rely on JSON pushdown but JSON reader no longer 
> supports pushdown.
> --
>
> Key: DRILL-912
> URL: https://issues.apache.org/jira/browse/DRILL-912
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Query Planning & Optimization
>Reporter: Jacques Nadeau
>Assignee: Volodymyr Vysotskyi
>Priority: Major
> Fix For: 1.16.0
>
>
> We need to either add back pushdown or update the tests so that they use a 
> reader that supports pushdown.





[jira] [Created] (DRILL-7229) Add scripts to the release folder in Drill Repo

2019-04-30 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7229:


 Summary: Add scripts to the release folder in Drill Repo
 Key: DRILL-7229
 URL: https://issues.apache.org/jira/browse/DRILL-7229
 Project: Apache Drill
  Issue Type: Sub-task
  Components: Tools, Build & Test
Reporter: Sorabh Hamirwasia
Assignee: Parth Chandra
 Fix For: 1.17.0


Move the release automation script into Drill repo.





[jira] [Created] (DRILL-7230) Add README.md with instructions for release

2019-04-30 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7230:


 Summary: Add README.md with instructions for release
 Key: DRILL-7230
 URL: https://issues.apache.org/jira/browse/DRILL-7230
 Project: Apache Drill
  Issue Type: Sub-task
  Components: Tools, Build & Test
Reporter: Sorabh Hamirwasia
 Fix For: 1.17.0








[jira] [Commented] (DRILL-7226) Compilation error on Windows when building from the release tarball sources

2019-04-30 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16830543#comment-16830543
 ] 

Sorabh Hamirwasia commented on DRILL-7226:
--

[~denysord88] - Was the build tried with the ApacheRelease profile as well?

> Compilation error on Windows when building from the release tarball sources
> ---
>
> Key: DRILL-7226
> URL: https://issues.apache.org/jira/browse/DRILL-7226
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Denys Ordynskiy
>Assignee: Kunal Khatua
>Priority: Major
> Attachments: tarball_building.log
>
>
> *Description:*
>  OS - Windows.
>  Downloaded tarball with sources for the 
> [1.15|http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/apache-drill-1.15.0-src.tar.gz]
>  or 
> [1.16|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc2/apache-drill-1.16.0-src.tar.gz]
>  Drill release.
>  Extracted the sources.
>  Built sources using the following command:
> {noformat}
> mvn clean install -DskipTests -Pmapr
> {noformat}
> *Expected result:*
>  BUILD SUCCESS
> *Actual result:*
> {noformat}
> ...
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> D:\src\rc2\apache-drill-1.16.0-src\protocol\src\main\java\org\apache\drill\exec\proto\beans\RecordBatchDef.java:[53,17]
>  error: cannot find symbol
>   symbol:   class SerializedField
>   location: class RecordBatchDef
> ...
> BUILD FAILURE
> {noformat}
> See "tarball_building.log"
> There are no errors when building sources on Windows from the GitHub release 
> [branch|https://github.com/sohami/drill/commits/drill-1.16.0].





[jira] [Created] (DRILL-7221) Exclude debug files generated by maven debug option from jar

2019-04-26 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7221:


 Summary: Exclude debug files generated by maven debug option from jar
 Key: DRILL-7221
 URL: https://issues.apache.org/jira/browse/DRILL-7221
 Project: Apache Drill
  Issue Type: Sub-task
  Components: Tools, Build & Test
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.17.0


The release automation script was using the -X debug option during the release:prepare 
phase. This generated some debug files which were getting packaged into the jars, 
because the patterns of these debug files were not ignored in the exclude 
configuration of the maven-jar plugin. It would be good to ignore these.

*Debug files which were included:*

*javac.sh*
*org.codehaus.plexus.compiler.javac.JavacCompiler1256088670033285178arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler1458111453480208588arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler2392560589194600493arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler4475905192586529595arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler4524532450095901144arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler4670895443631397937arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler5215058338087807885arguments*
*org.codehaus.plexus.compiler.javac.JavacCompiler7526103232425779297arguments*
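An exclude configuration along these lines in the maven-jar-plugin could filter such files out of the jars (a sketch only; the exact patterns and where they live in Drill's pom may differ):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- debug artifacts left behind by "mvn -X" during release:prepare -->
      <exclude>**/javac.sh</exclude>
      <exclude>**/org.codehaus.plexus.compiler.javac.JavacCompiler*arguments</exclude>
    </excludes>
  </configuration>
</plugin>
```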





[jira] [Created] (DRILL-7220) Create a release package in Drill repo with automated scripts and instructions

2019-04-26 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7220:


 Summary: Create a release package in Drill repo with automated 
scripts and instructions
 Key: DRILL-7220
 URL: https://issues.apache.org/jira/browse/DRILL-7220
 Project: Apache Drill
  Issue Type: Task
  Components: Tools, Build & Test
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.17.0








[jira] [Updated] (DRILL-7207) Update the copyright year in NOTICE.txt file

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7207:
-
Component/s: Tools, Build & Test

> Update the copyright year in NOTICE.txt file
> 
>
> Key: DRILL-7207
> URL: https://issues.apache.org/jira/browse/DRILL-7207
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> The copyright year in the NOTICE.txt file still says 2018; we should update it to 2019.





[jira] [Updated] (DRILL-7207) Update the copyright year in NOTICE.txt file

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7207:
-
Labels: ready-to-commit  (was: )

> Update the copyright year in NOTICE.txt file
> 
>
> Key: DRILL-7207
> URL: https://issues.apache.org/jira/browse/DRILL-7207
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> The copyright year in the NOTICE.txt file still says 2018; we should update it to 2019.





[jira] [Updated] (DRILL-7212) Add gpg key with apache.org email for sorabh

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7212:
-
Labels: ready-to-commit  (was: )

> Add gpg key with apache.org email for sorabh
> 
>
> Key: DRILL-7212
> URL: https://issues.apache.org/jira/browse/DRILL-7212
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>






[jira] [Updated] (DRILL-7202) Failed query shows warning that fragments have made no progress

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7202:
-
Component/s: Web Server

> Failed query shows warning that fragments have made no progress
> --
>
> Key: DRILL-7202
> URL: https://issues.apache.org/jira/browse/DRILL-7202
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
> Attachments: FailedQuery_NoProgressWarning_Repro_Attempt.png, 
> no_fragments_progress_warning.JPG
>
>
> A failed query shows a warning that fragments have made no progress.
> Since the query failed during the planning stage and did not have any fragments, 
> it looks strange to see such a warning. Screenshot attached.





[jira] [Updated] (DRILL-7212) Add gpg key with apache.org email for sorabh

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7212:
-
Component/s: Tools, Build & Test

> Add gpg key with apache.org email for sorabh
> 
>
> Key: DRILL-7212
> URL: https://issues.apache.org/jira/browse/DRILL-7212
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.16.0
>
>






[jira] [Updated] (DRILL-7212) Add gpg key with apache.org email for sorabh

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7212:
-
Reviewer: Boaz Ben-Zvi

> Add gpg key with apache.org email for sorabh
> 
>
> Key: DRILL-7212
> URL: https://issues.apache.org/jira/browse/DRILL-7212
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.16.0
>
>






[jira] [Updated] (DRILL-7207) Update the copyright year in NOTICE.txt file

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7207:
-
Reviewer: Boaz Ben-Zvi

> Update the copyright year in NOTICE.txt file
> 
>
> Key: DRILL-7207
> URL: https://issues.apache.org/jira/browse/DRILL-7207
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.16.0
>
>
> The copyright year in the NOTICE.txt file still says 2018; we should update it to 2019.





[jira] [Updated] (DRILL-7213) drill-format-mapr.jar contains stale git.properties file

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7213:
-
Labels: ready-to-commit  (was: )

> drill-format-mapr.jar contains stale git.properties file
> 
>
> Key: DRILL-7213
> URL: https://issues.apache.org/jira/browse/DRILL-7213
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> For some reason, only the drill-format-mapr jar in the release candidate tarball 
> contains a stale git.properties file (the one available during the prepare 
> phase). Other format plugin jars seem fine.





[jira] [Updated] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7201:
-
Labels: ready-to-commit  (was: )

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, error_with_symbols.png, 
> image-2019-04-24-10-22-30-830.png, inspect-element-font.png
>
>
> The error window contains strange symbols on Windows but works fine on other OSes. 
> Before, we had an alert instead, which did not have this issue.
> Screenshot attached.





[jira] [Updated] (DRILL-7201) Strange symbols in error window (Windows)

2019-04-25 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7201:
-
Component/s: Web Server

> Strange symbols in error window (Windows)
> -
>
> Key: DRILL-7201
> URL: https://issues.apache.org/jira/browse/DRILL-7201
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Arina Ielchiieva
>Assignee: Kunal Khatua
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
> Attachments: error_window.JPG, error_with_symbols.png, 
> image-2019-04-24-10-22-30-830.png, inspect-element-font.png
>
>
> The error window contains strange symbols on Windows but works fine on other OSes. 
> Before, we had an alert instead, which did not have this issue.
> Screenshot attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7213) drill-format-mapr.jar contains stale git.properties file

2019-04-24 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7213:


 Summary: drill-format-mapr.jar contains stale git.properties file
 Key: DRILL-7213
 URL: https://issues.apache.org/jira/browse/DRILL-7213
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.16.0


For some reason, only the drill-format-mapr jar in the release candidate tarball 
contains a stale git.properties file (the one available during the prepare 
phase). Other format plugin jars seem fine.





[jira] [Comment Edited] (DRILL-7208) Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825559#comment-16825559
 ] 

Sorabh Hamirwasia edited comment on DRILL-7208 at 4/24/19 10:53 PM:


I tried the same thing with the 1.15-src tarball and it has the same issue, so I 
don't think this is a release blocker anymore. Marking the Jira for 1.17.

 

[root@qa-node161 bin]# ./sqlline -u "jdbc:drill:drillbit=localhost"
 Apache Drill 1.15.0
 "Keep your data close, but your Drillbits closer."
 0: jdbc:drill:drillbit=localhost> SELECT * FROM sys.version;
 
+---------+-----------+----------------+-------------+-------------+------------+
| version | commit_id | commit_message | commit_time | build_email | build_time |
+---------+-----------+----------------+-------------+-------------+------------+
| 1.15.0  | Unknown   |                |             | Unknown     |            |
+---------+-----------+----------------+-------------+-------------+------------+
 1 row selected (0.403 seconds)


was (Author: shamirwasia):
I tried the same thing with the 1.15-src tarball and it has the same issue, so I 
don't think this is a release blocker anymore.

 

[root@qa-node161 bin]# ./sqlline -u "jdbc:drill:drillbit=localhost"
Apache Drill 1.15.0
"Keep your data close, but your Drillbits closer."
0: jdbc:drill:drillbit=localhost> SELECT * FROM sys.version;
+---------+-----------+----------------+-------------+-------------+------------+
| version | commit_id | commit_message | commit_time | build_email | build_time |
+---------+-----------+----------------+-------------+-------------+------------+
| 1.15.0  | Unknown   |                |             | Unknown     |            |
+---------+-----------+----------------+-------------+-------------+------------+
1 row selected (0.403 seconds)

> Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.
> -
>
> Key: DRILL-7208
> URL: https://issues.apache.org/jira/browse/DRILL-7208
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: 1.16.0
>
>
> *Steps:*
>  # Download the rc1 sources tarball:
>  
> [apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
>  # Unpack
>  # Build:
> {noformat}
> mvn clean install -DskipTests
> {noformat}
>  # Start Drill in embedded mode:
> {noformat}
> Linux:
> distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
> Windows:
> distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat 
> -u "jdbc:drill:zk=local"
> {noformat}
>  # Run the query:
> {code:sql}
> select * from sys.version;
> {code}
> *Expected result:*
>  Drill version, commit_id, commit_message, commit_time, build_email, 
> build_time should be correctly displayed.
> *Actual result:*
> {noformat}
> apache drill> select * from sys.version;
> +---------+-----------+----------------+-------------+-------------+------------+
> | version | commit_id | commit_message | commit_time | build_email | build_time |
> +---------+-----------+----------------+-------------+-------------+------------+
> | 1.16.0  | Unknown   |                |             | Unknown     |            |
> +---------+-----------+----------------+-------------+-------------+------------+
> {noformat}





[jira] [Updated] (DRILL-7208) Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7208:
-
Affects Version/s: 1.15.0

> Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.
> -
>
> Key: DRILL-7208
> URL: https://issues.apache.org/jira/browse/DRILL-7208
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: 1.17.0
>
>
> *Steps:*
>  # Download the rc1 sources tarball:
>  
> [apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
>  # Unpack
>  # Build:
> {noformat}
> mvn clean install -DskipTests
> {noformat}
>  # Start Drill in embedded mode:
> {noformat}
> Linux:
> distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
> Windows:
> distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat 
> -u "jdbc:drill:zk=local"
> {noformat}
>  # Run the query:
> {code:sql}
> select * from sys.version;
> {code}
> *Expected result:*
>  Drill version, commit_id, commit_message, commit_time, build_email, 
> build_time should be correctly displayed.
> *Actual result:*
> {noformat}
> apache drill> select * from sys.version;
> +---------+-----------+----------------+-------------+-------------+------------+
> | version | commit_id | commit_message | commit_time | build_email | build_time |
> +---------+-----------+----------------+-------------+-------------+------------+
> | 1.16.0  | Unknown   |                |             | Unknown     |            |
> +---------+-----------+----------------+-------------+-------------+------------+
> {noformat}





[jira] [Commented] (DRILL-7208) Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825559#comment-16825559
 ] 

Sorabh Hamirwasia commented on DRILL-7208:
--

I tried the same thing with the 1.15-src tarball and it has the same issue, so I 
don't think this is a release blocker anymore.

 

[root@qa-node161 bin]# ./sqlline -u "jdbc:drill:drillbit=localhost"
Apache Drill 1.15.0
"Keep your data close, but your Drillbits closer."
0: jdbc:drill:drillbit=localhost> SELECT * FROM sys.version;
+---------+-----------+----------------+-------------+-------------+------------+
| version | commit_id | commit_message | commit_time | build_email | build_time |
+---------+-----------+----------------+-------------+-------------+------------+
| 1.15.0  | Unknown   |                |             | Unknown     |            |
+---------+-----------+----------------+-------------+-------------+------------+
1 row selected (0.403 seconds)
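The "Unknown" values presumably come from missing or stale git metadata in the source tarball (compare DRILL-7213's git.properties issue): when no readable git.properties is produced at build time, the version lookup falls back to defaults. A simplified Python sketch of that fallback (the property and field names here are illustrative, not Drill's actual code):

```python
def read_git_properties(text):
    """Parse Java-style .properties text into a dict (simplified)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def version_row(props):
    """Build a sys.version-like row, defaulting to 'Unknown'/'' when
    git metadata is absent (as in the output above)."""
    return {
        "commit_id": props.get("git.commit.id", "Unknown"),
        "commit_message": props.get("git.commit.message.short", ""),
        "build_email": props.get("git.build.user.email", "Unknown"),
    }
```

With an empty or missing properties file, every field degrades to its default, which matches the "Unknown" row shown above.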

> Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.
> -
>
> Key: DRILL-7208
> URL: https://issues.apache.org/jira/browse/DRILL-7208
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: 1.16.0
>
>
> *Steps:*
>  # Download the rc1 sources tarball:
>  
> [apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
>  # Unpack
>  # Build:
> {noformat}
> mvn clean install -DskipTests
> {noformat}
>  # Start Drill in embedded mode:
> {noformat}
> Linux:
> distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
> Windows:
> distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat 
> -u "jdbc:drill:zk=local"
> {noformat}
>  # Run the query:
> {code:sql}
> select * from sys.version;
> {code}
> *Expected result:*
>  Drill version, commit_id, commit_message, commit_time, build_email, 
> build_time should be correctly displayed.
> *Actual result:*
> {noformat}
> apache drill> select * from sys.version;
> +---------+-----------+----------------+-------------+-------------+------------+
> | version | commit_id | commit_message | commit_time | build_email | build_time |
> +---------+-----------+----------------+-------------+-------------+------------+
> | 1.16.0  | Unknown   |                |             | Unknown     |            |
> +---------+-----------+----------------+-------------+-------------+------------+
> {noformat}





[jira] [Updated] (DRILL-7208) Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7208:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.
> -
>
> Key: DRILL-7208
> URL: https://issues.apache.org/jira/browse/DRILL-7208
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: 1.17.0
>
>
> *Steps:*
>  # Download the rc1 sources tarball:
>  
> [apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
>  # Unpack
>  # Build:
> {noformat}
> mvn clean install -DskipTests
> {noformat}
>  # Start Drill in embedded mode:
> {noformat}
> Linux:
> distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
> Windows:
> distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat 
> -u "jdbc:drill:zk=local"
> {noformat}
>  # Run the query:
> {code:sql}
> select * from sys.version;
> {code}
> *Expected result:*
>  Drill version, commit_id, commit_message, commit_time, build_email, 
> build_time should be correctly displayed.
> *Actual result:*
> {noformat}
> apache drill> select * from sys.version;
> +---------+-----------+----------------+-------------+-------------+------------+
> | version | commit_id | commit_message | commit_time | build_email | build_time |
> +---------+-----------+----------------+-------------+-------------+------------+
> | 1.16.0  | Unknown   |                |             | Unknown     |            |
> +---------+-----------+----------------+-------------+-------------+------------+
> {noformat}





[jira] [Created] (DRILL-7212) Add gpg key with apache.org email for sorabh

2019-04-24 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7212:


 Summary: Add gpg key with apache.org email for sorabh
 Key: DRILL-7212
 URL: https://issues.apache.org/jira/browse/DRILL-7212
 Project: Apache Drill
  Issue Type: Task
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.16.0








[jira] [Updated] (DRILL-7208) Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7208:
-
Fix Version/s: 1.16.0

> Drill commit is not shown when building Drill from the 1.16.0-rc1 release sources.
> -
>
> Key: DRILL-7208
> URL: https://issues.apache.org/jira/browse/DRILL-7208
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: 1.16.0
>
>
> *Steps:*
>  # Download the rc1 sources tarball:
>  
> [apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
>  # Unpack
>  # Build:
> {noformat}
> mvn clean install -DskipTests
> {noformat}
>  # Start Drill in embedded mode:
> {noformat}
> Linux:
> distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
> Windows:
> distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat 
> -u "jdbc:drill:zk=local"
> {noformat}
>  # Run the query:
> {code:sql}
> select * from sys.version;
> {code}
> *Expected result:*
>  Drill version, commit_id, commit_message, commit_time, build_email, 
> build_time should be correctly displayed.
> *Actual result:*
> {noformat}
> apache drill> select * from sys.version;
> +---------+-----------+----------------+-------------+-------------+------------+
> | version | commit_id | commit_message | commit_time | build_email | build_time |
> +---------+-----------+----------------+-------------+-------------+------------+
> | 1.16.0  | Unknown   |                |             | Unknown     |            |
> +---------+-----------+----------------+-------------+-------------+------------+
> {noformat}





[jira] [Created] (DRILL-7207) Update the copyright year in NOTICE.txt file

2019-04-24 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7207:


 Summary: Update the copyright year in NOTICE.txt file
 Key: DRILL-7207
 URL: https://issues.apache.org/jira/browse/DRILL-7207
 Project: Apache Drill
  Issue Type: Task
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.16.0


The copyright year in the NOTICE.txt file still says 2018; we should update it to 2019.





[jira] [Commented] (DRILL-7195) Query returns incorrect result or does not fail when cast with is null is used in filter condition

2019-04-24 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825306#comment-16825306
 ] 

Sorabh Hamirwasia commented on DRILL-7195:
--

Looks like I should have been more explicit. When I talk about deciding whether 
or not to upgrade the Calcite version, I mean considering whether any bugs in 
Calcite 1.19 are causing regressions in Drill, not just postponing the upgrade 
based on any bugs.

> Query returns incorrect result or does not fail when cast with is null is 
> used in filter condition
> --
>
> Key: DRILL-7195
> URL: https://issues.apache.org/jira/browse/DRILL-7195
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
> Fix For: 1.17.0
>
>
> 1. For the case when a query contains a filter combining a {{cast}} which cannot 
> be performed with {{is null}}, the query does not fail:
> {code:sql}
> select * from dfs.tmp.`a.json` as t where cast(t.a as integer) is null;
> +---+
> | a |
> +---+
> +---+
> No rows selected (0.142 seconds)
> {code}
> where
> {noformat}
> cat /tmp/a.json
> {"a":"aaa"}
> {noformat}
> But when this condition is specified in the project list, the query fails, as 
> expected:
> {code:sql}
> select cast(t.a as integer) is null from dfs.tmp.`a.json` t;
> Error: SYSTEM ERROR: NumberFormatException: aaa
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: ed3982ce-a12f-4d63-bc6e-cafddf28cc24 on user515050-pc:31010] 
> (state=,code=0)
> {code}
> This is a regression: in Drill 1.15, both the first and the second queries 
> fail:
> {code:sql}
> select * from dfs.tmp.`a.json` as t where cast(t.a as integer) is null;
> Error: SYSTEM ERROR: NumberFormatException: aaa
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: 2f878f15-ddaa-48cd-9dfb-45c04db39048 on user515050-pc:31010] 
> (state=,code=0)
> {code}
> 2. For the case when {{drill.exec.functions.cast_empty_string_to_null}} is 
> enabled, this issue will cause wrong results:
> {code:sql}
> alter system set `drill.exec.functions.cast_empty_string_to_null`=true;
> select * from dfs.tmp.`a1.json` t where cast(t.a as integer) is null;
> +---+
> | a |
> +---+
> +---+
> No rows selected (1.759 seconds)
> {code}
> where
> {noformat}
> cat /tmp/a1.json 
> {"a":"1"}
> {"a":""}
> {noformat}
> Result for Drill 1.15.0:
> {code:sql}
> select * from dfs.tmp.`a1.json` t where cast(t.a as integer) is null;
> ++
> | a  |
> ++
> ||
> ++
> 1 row selected (1.724 seconds)
> {code}
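The inconsistency above can be illustrated outside Drill. Below is a minimal Python sketch (the helper names `cast_strict` and `cast_or_null` are hypothetical, not Drill code) contrasting a strict cast, which raises on bad input as Drill 1.15 did, with a lenient cast that yields NULL on failure, which would make the filter match the row rather than silently return nothing:

```python
def cast_strict(value):
    # Raises on non-numeric input, matching Drill 1.15's behavior for both queries.
    return int(value)

def cast_or_null(value):
    # Swallows the cast failure and yields NULL instead of raising.
    try:
        return int(value)
    except ValueError:
        return None

rows = [{"a": "aaa"}]

# Projecting the strict cast evaluates it for every row and surfaces the error,
# as the second query above does.
try:
    projected = [cast_strict(r["a"]) for r in rows]
except ValueError as exc:
    projected = f"error: {exc}"

# A filter built on the lenient cast matches the row (NULL satisfies IS NULL).
# Drill 1.16 instead returned zero rows, which is the reported inconsistency:
# the filter behaves like neither the strict nor the lenient cast.
filtered = [r for r in rows if cast_or_null(r["a"]) is None]
```

Either semantics would be self-consistent; the bug is that projection and filter disagree.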



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7191) RM blobs persistence in Zookeeper for Distributed RM

2019-04-23 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7191:
-
Description: 
Changes to support storing a UUID for each Drillbit Service Instance locally, to 
be used by the planner and execution layer. This UUID is used to uniquely 
identify a Drillbit and to register Drillbit information in the RM StateBlobs.

Introduced a PersistentStore named ZookeeperTransactionalPersistenceStore with 
transactional capabilities using Zookeeper's transactional APIs. It is used for 
updating the RM state blobs, since all such updates need to happen in a 
transactional manner. Added the RMStateBlobs definitions and serde support for 
Zookeeper.

Implemented DistributedRM and its corresponding QueryRM APIs and state 
management.

Updated the query state management in Foreman so that the same Foreman object 
can be submitted multiple times. Also introduced two maps that keep track of 
waiting and running queries. These changes support the async admit protocol that 
will be needed with Distributed RM.

  was:
Selection of the queue based on the acl/tags
Non-leader queue configurations
All required blobs for the queues in Zookeeper.
Concept of waiting queues and running queues on Foreman
Handling state transition of queryRM
Changes to support storing UUID for each Drillbit Service Instance locally to 
be used by planner and execution layer. This UUID is used to uniquely identify 
a Drillbit and register Drillbit information in the RM StateBlobs. Introduced a 
PersistentStore named ZookeeperTransactionalPersistenceStore with Transactional 
capabilities using Zookeeper Transactional API’s. This is used for updating RM 
State blobs as all the updates need to happen in transactional manner. Added 
RMStateBlobs definition and support for serde to Zookeeper. Implementation for 
DistributedRM and its corresponding QueryRM apis.
Updated the state management of Query in Foreman so that same Foreman object 
can be submitted multiple times. Also introduced concept of 2 maps keeping 
track of waiting and running queries. These were done to support for async 
admit protocol which will be needed with Distributed RM.


> RM blobs persistence in Zookeeper for Distributed RM
> 
>
> Key: DRILL-7191
> URL: https://issues.apache.org/jira/browse/DRILL-7191
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Server, Query Planning & Optimization
>Affects Versions: 1.17.0
>Reporter: Hanumath Rao Maduri
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.17.0
>
>
> Changes to support storing a UUID for each Drillbit Service Instance locally, 
> to be used by the planner and execution layer. This UUID is used to uniquely 
> identify a Drillbit and to register Drillbit information in the RM StateBlobs.
> Introduced a PersistentStore named ZookeeperTransactionalPersistenceStore with 
> transactional capabilities using Zookeeper's transactional APIs. It is used 
> for updating the RM state blobs, since all such updates need to happen in a 
> transactional manner. Added the RMStateBlobs definitions and serde support for 
> Zookeeper.
> Implemented DistributedRM and its corresponding QueryRM APIs and state 
> management.
> Updated the query state management in Foreman so that the same Foreman object 
> can be submitted multiple times. Also introduced two maps that keep track of 
> waiting and running queries. These changes support the async admit protocol 
> that will be needed with Distributed RM.
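The transactional requirement described above — several RM state blobs must change together or not at all — can be sketched with a toy in-memory store. This illustrates the ZooKeeper-style versioned multi-op semantics only; it is not the actual ZookeeperTransactionalPersistenceStore API, and the class and path names are made up for the example:

```python
class TransactionalBlobStore:
    """Toy store with ZooKeeper-style versioned, all-or-nothing multi-updates."""

    def __init__(self):
        self.blobs = {}      # path -> bytes
        self.versions = {}   # path -> int (unknown paths report version -1)

    def read(self, path):
        return self.blobs.get(path), self.versions.get(path, -1)

    def commit(self, ops):
        """ops: list of (path, expected_version, new_value).

        Validate every version check first; apply nothing if any check fails,
        so concurrent writers cannot leave the blobs half-updated."""
        for path, expected, _ in ops:
            if self.versions.get(path, -1) != expected:
                return False  # stale view detected; whole transaction aborts
        for path, expected, value in ops:
            self.blobs[path] = value
            self.versions[path] = expected + 1
        return True

store = TransactionalBlobStore()
# Initial creation of two related blobs in one transaction.
store.commit([("rm/queues/q1", -1, b"{}"), ("rm/drillbits", -1, b"{}")])

# A writer holding a stale version fails atomically: neither blob changes.
ok = store.commit([("rm/queues/q1", 5, b"new"), ("rm/drillbits", 0, b"new")])
```

The real implementation gets the same all-or-nothing guarantee from ZooKeeper's multi-operation transaction support rather than an in-process check.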



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-7191) RM blobs persistence in Zookeeper for Distributed RM

2019-04-23 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia reassigned DRILL-7191:


Assignee: Sorabh Hamirwasia

> RM blobs persistence in Zookeeper for Distributed RM
> 
>
> Key: DRILL-7191
> URL: https://issues.apache.org/jira/browse/DRILL-7191
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Server, Query Planning & Optimization
>Affects Versions: 1.17.0
>Reporter: Hanumath Rao Maduri
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.17.0
>
>
> Selection of the queue based on the acl/tags
> Non-leader queue configurations
> All required blobs for the queues in Zookeeper.
> Concept of waiting queues and running queues on Foreman
> Handling state transition of queryRM
> Changes to support storing a UUID for each Drillbit Service Instance locally, 
> to be used by the planner and execution layer. This UUID is used to uniquely 
> identify a Drillbit and to register Drillbit information in the RM StateBlobs. 
> Introduced a PersistentStore named ZookeeperTransactionalPersistenceStore with 
> transactional capabilities using Zookeeper's transactional APIs. It is used 
> for updating the RM state blobs, since all such updates need to happen in a 
> transactional manner. Added the RMStateBlobs definitions and serde support for 
> Zookeeper. Implemented DistributedRM and its corresponding QueryRM APIs.
> Updated the query state management in Foreman so that the same Foreman object 
> can be submitted multiple times. Also introduced two maps that keep track of 
> waiting and running queries. These changes support the async admit protocol 
> that will be needed with Distributed RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7195) Query returns incorrect result or does not fail when cast with is null is used in filter condition

2019-04-23 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824314#comment-16824314
 ] 

Sorabh Hamirwasia commented on DRILL-7195:
--

Since the issue is on the Calcite side, the correct way to fix it is to get the 
fix from Calcite itself. Looks like there are [multiple issues reported on 1.18 
which were fixed in 
1.19|https://issues.apache.org/jira/issues/?jql=project%20%3D%20CALCITE%20AND%20issuetype%20%3D%20Bug%20AND%20affectedVersion%20%3D%201.18.0%20AND%20fixVersion%20%3D%201.19.0%20ORDER%20BY%20created%20DESC]
 in Calcite. If we are planning an upgrade to 1.19 in the Drill 1.17 timeframe, 
it would be helpful to track such bugs in the 1.19 branch and make an early 
decision on whether to upgrade to 1.19. 

Thanks for discussing and resolving this issue. [~vvysotskyi] - Can you please 
work with [~bbevens] to document the limitation?

> Query returns incorrect result or does not fail when cast with is null is 
> used in filter condition
> --
>
> Key: DRILL-7195
> URL: https://issues.apache.org/jira/browse/DRILL-7195
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
> Fix For: 1.17.0
>
>
> 1. When a query contains a filter combining a {{cast}} that cannot be 
> performed with {{is null}}, the query does not fail:
> {code:sql}
> select * from dfs.tmp.`a.json` as t where cast(t.a as integer) is null;
> +---+
> | a |
> +---+
> +---+
> No rows selected (0.142 seconds)
> {code}
> where
> {noformat}
> cat /tmp/a.json
> {"a":"aaa"}
> {noformat}
> But when this condition is specified in the project list, the query fails, as 
> expected:
> {code:sql}
> select cast(t.a as integer) is null from dfs.tmp.`a.json` t;
> Error: SYSTEM ERROR: NumberFormatException: aaa
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: ed3982ce-a12f-4d63-bc6e-cafddf28cc24 on user515050-pc:31010] 
> (state=,code=0)
> {code}
> This is a regression; in Drill 1.15 both the first and the second queries 
> fail:
> {code:sql}
> select * from dfs.tmp.`a.json` as t where cast(t.a as integer) is null;
> Error: SYSTEM ERROR: NumberFormatException: aaa
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: 2f878f15-ddaa-48cd-9dfb-45c04db39048 on user515050-pc:31010] 
> (state=,code=0)
> {code}
> 2. For the case when {{drill.exec.functions.cast_empty_string_to_null}} is 
> enabled, this issue will cause wrong results:
> {code:sql}
> alter system set `drill.exec.functions.cast_empty_string_to_null`=true;
> select * from dfs.tmp.`a1.json` t where cast(t.a as integer) is null;
> +---+
> | a |
> +---+
> +---+
> No rows selected (1.759 seconds)
> {code}
> where
> {noformat}
> cat /tmp/a1.json 
> {"a":"1"}
> {"a":""}
> {noformat}
> Result for Drill 1.15.0:
> {code:sql}
> select * from dfs.tmp.`a1.json` t where cast(t.a as integer) is null;
> ++
> | a  |
> ++
> ||
> ++
> 1 row selected (1.724 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7190) Missing backward compatibility for REST API with DRILL-6562

2019-04-21 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7190:
-
Labels: ready-to-commit  (was: )

> Missing backward compatibility for REST API with DRILL-6562
> ---
>
> Key: DRILL-7190
> URL: https://issues.apache.org/jira/browse/DRILL-7190
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Vitalii Diravka
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> With DRILL-6562 I am seeing additional changes that no longer support the 
> older request URLs. For example:
> 1) Earlier, export of a plugin config defaulted to JSON format using the URL 
> */storage/\{name}/export*; the new URL is */storage/\{name}/export/\{format}*. 
> This means the older one is not supported anymore. Is that intended, or should 
> we treat the format as JSON by default when it is not provided?
> 2) The POST URL to create and update a plugin changed from */storage/\{name}* 
> to */storage/create_update*.
> 3) Once a storage plugin is deleted, the page is no longer redirected to 
> */storage*, as it was in 1.15, because of this line change: 
> [https://github.com/apache/drill/commit/5fff1d8bff899e1af551c16f26a58b6b1d033ffb#diff-274673e64e6f54be595a8703753123b0R115]
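One way to restore the old endpoints while keeping the new ones is to accept both URL shapes and fill in defaults for the legacy form. The sketch below shows that dispatch idea in plain Python (the route parsing and `export_plugin` handler are hypothetical stand-ins, not Drill's actual JAX-RS resources):

```python
def export_plugin(name, fmt="json"):
    # Stand-in for the real export handler; fmt defaults to JSON so the
    # pre-DRILL-6562 URL keeps its old behavior.
    return (name, fmt)

def dispatch(path):
    parts = path.strip("/").split("/")
    # New URL introduced by DRILL-6562: /storage/{name}/export/{format}
    if len(parts) == 4 and parts[0] == "storage" and parts[2] == "export":
        return export_plugin(parts[1], parts[3])
    # Legacy URL: /storage/{name}/export -> treat the format as JSON
    if len(parts) == 3 and parts[0] == "storage" and parts[2] == "export":
        return export_plugin(parts[1])
    raise KeyError(path)
```

With this shape, old clients calling `/storage/dfs/export` keep working while new clients can request an explicit format.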



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (DRILL-7164) KafkaFilterPushdownTest is sometimes failing to pattern match correctly.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia reassigned DRILL-7164:


Assignee: Sorabh Hamirwasia  (was: Abhishek Ravi)

> KafkaFilterPushdownTest is sometimes failing to pattern match correctly.
> 
>
> Key: DRILL-7164
> URL: https://issues.apache.org/jira/browse/DRILL-7164
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Kafka
>Affects Versions: 1.16.0
>Reporter: Hanumath Rao Maduri
>Assignee: Sorabh Hamirwasia
>Priority: Major
> Fix For: 1.17.0
>
>
> On my private build I am intermittently hitting a Kafka storage test failure. 
> Here is the issue I came across.
> {code}
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]
> 15:01:39.852 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 
> -292 B(75.4 KiB), h: -391.1 MiB(240.7 MiB), nh: 824.5 KiB(129.0 MiB)): 
> testPushdownOffsetOneRecordReturnedWithBoundaryConditions(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
> java.lang.AssertionError: Unable to find expected string "kafkaScanSpec" 
> : {
>   "topicName" : "drill-pushdown-topic"
> },
> "cost" in plan: {
>   "head" : {
> "version" : 1,
> "generator" : {
>   "type" : "ExplainHandler",
>   "info" : ""
> },
> "type" : "APACHE_DRILL_PHYSICAL",
> "options" : [ {
>   "kind" : "STRING",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.record.reader",
>   "string_val" : 
> "org.apache.drill.exec.store.kafka.decoders.JsonMessageReader",
>   "scope" : "SESSION"
> }, {
>   "kind" : "BOOLEAN",
>   "accessibleScopes" : "ALL",
>   "name" : "exec.errors.verbose",
>   "bool_val" : true,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "store.kafka.poll.timeout",
>   "num_val" : 5000,
>   "scope" : "SESSION"
> }, {
>   "kind" : "LONG",
>   "accessibleScopes" : "ALL",
>   "name" : "planner.width.max_per_node",
>   "num_val" : 2,
>   "scope" : "SESSION"
> } ],
> "queue" : 0,
> "hasResourcePlan" : false,
> "resultMode" : "EXEC"
>   },
>   "graph" : [ {
> "pop" : "kafka-scan",
> "@id" : 6,
> "userName" : "",
> "kafkaStoragePluginConfig" : {
>   "type" : "kafka",
>   "kafkaConsumerProps" : {
> "bootstrap.servers" : "127.0.0.1:56524",
> "group.id" : "drill-test-consumer"
>   },
>   "enabled" : true
> },
> "columns" : [ "`**`", "`kafkaMsgOffset`" ],
> "kafkaScanSpec" : {
>   "topicName" : "drill-pushdown-topic"
> },
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "project",
> "@id" : 5,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`**`"
> }, {
>   "ref" : "`kafkaMsgOffset`",
>   "expr" : "`kafkaMsgOffset`"
> } ],
> "child" : 6,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 5.0
> }
>   }, {
> "pop" : "filter",
> "@id" : 4,
> "child" : 5,
> "expr" : "equal(`kafkaMsgOffset`, 9) ",
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 0.75
> }
>   }, {
> "pop" : "selection-vector-remover",
> "@id" : 3,
> "child" : 4,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 2,
> "exprs" : [ {
>   "ref" : "`T23¦¦**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 3,
> "outputProj" : false,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "project",
> "@id" : 1,
> "exprs" : [ {
>   "ref" : "`**`",
>   "expr" : "`T23¦¦**`"
> } ],
> "child" : 2,
> "outputProj" : true,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   }, {
> "pop" : "screen",
> "@id" : 0,
> "child" : 1,
> "initialAllocation" : 100,
> "maxAllocation" : 100,
> "cost" : {
>   "memoryCost" : 1.6777216E7,
>   "outputRowCount" : 1.0
> }
>   } ]
> }!
> {code}
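The failing assertion above searches for an expected substring inside a pretty-printed plan, which is fragile against whitespace and field-ordering changes. A more robust check parses the plan JSON and asserts on fields directly. A sketch of that idea in Python (the real test harness is Java; the miniature plan below is abridged from the output above):

```python
import json

# A miniature plan in the same shape as the physical plan in the failure above.
plan_text = """
{ "graph": [
    { "pop": "kafka-scan",
      "kafkaScanSpec": { "topicName": "drill-pushdown-topic" },
      "cost": { "memoryCost": 1.6777216e7, "outputRowCount": 5.0 } } ] }
"""

plan = json.loads(plan_text)
# Locate the scan operator structurally instead of by substring position.
scan = next(op for op in plan["graph"] if op["pop"] == "kafka-scan")
topic = scan["kafkaScanSpec"]["topicName"]
row_count = scan["cost"]["outputRowCount"]
```

Structural checks like these survive reformatting of the serialized plan that would break the substring match.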
> In 

[jira] [Updated] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7186:
-
Component/s: Web Server

> Missing storage.json REST endpoint.
> ---
>
> Key: DRILL-7186
> URL: https://issues.apache.org/jira/browse/DRILL-7186
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Arina Ielchiieva
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Steps:*
> 1. Open page: http://:8047/storage.json
> *Expected result:*
> storage.json is opened:
> {noformat}
> [ {
>   "name" : "cp",
>   "config" : {
> "type" : "file",
> "connection" : "classpath:///",
> "config" : null,
> "workspaces" : { },
> "formats" : {
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "dfs",
>   "config" : {
> "type" : "file",
> "connection" : "file:///",
> "config" : null,
> "workspaces" : {
>   "tmp" : {
> "location" : "/tmp",
> "writable" : true,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   },
>   "root" : {
> "location" : "/",
> "writable" : false,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   }
> },
> "formats" : {
>   "psv" : {
> "type" : "text",
> "extensions" : [ "tbl" ],
> "delimiter" : "|"
>   },
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "httpd" : {
> "type" : "httpd",
> "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "pcap" : {
> "type" : "pcap"
>   },
>   "pcapng" : {
> "type" : "pcapng",
> "extensions" : [ "pcapng" ]
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "sequencefile" : {
> "type" : "sequencefile",
> "extensions" : [ "seq" ]
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "hbase",
>   "config" : {
> "type" : "hbase",
> "config" : {
>   "hbase.zookeeper.quorum" : "localhost",
>   "hbase.zookeeper.property.clientPort" : "2181"
> },
> "size.calculator.enabled" : false,
> "enabled" : false
>   }
> }, {
>   "name" : "hive",
>   "config" : {
> "type" : "hive",
> "configProps" : {
>   "hive.metastore.uris" : "",
>   "javax.jdo.option.ConnectionURL" : 
> "jdbc:derby:;databaseName=../sample-data/drill_hive_db;create=true",
>   "hive.metastore.warehouse.dir" : "/tmp/drill_hive_wh",
>   "fs.default.name" : "file:///",
>   "hive.metastore.sasl.enabled" : "false",
>   "hive.metastore.schema.verification" : "false",
>   "datanucleus.schema.autoCreateAll" : "true"
> },
> "enabled" : false
>   }
> }, {
>   "name" : "kafka",
>   "config" : {
> "type" : "kafka",
> "kafkaConsumerProps" : {

[jira] [Comment Edited] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822107#comment-16822107
 ] 

Sorabh Hamirwasia edited comment on DRILL-7186 at 4/19/19 6:20 PM:
---

[~vitalii] - Given that the changes are small, I would say let's provide those 
for a better user experience. So, just to clarify, you will open a PR for all 3 
issues. Issues 1 and 2 relate to backward compatibility, and especially issue 3 
is a user-experience problem that definitely should be fixed. Opened 
https://issues.apache.org/jira/browse/DRILL-7190 to track the fixes


was (Author: shamirwasia):
[~vitalii] - Given the changes are small, I would say for better user 
experience let's provide those. So just to clarify you will open PR for all 3 
issues. 1 and 2 are related to backward compatibility and specially 3 is user 
experience which definitely should be fixed.

> Missing storage.json REST endpoint.
> ---
>
> Key: DRILL-7186
> URL: https://issues.apache.org/jira/browse/DRILL-7186
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Arina Ielchiieva
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Steps:*
> 1. Open page: http://:8047/storage.json
> *Expected result:*
> storage.json is opened:
> {noformat}
> [ {
>   "name" : "cp",
>   "config" : {
> "type" : "file",
> "connection" : "classpath:///",
> "config" : null,
> "workspaces" : { },
> "formats" : {
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "dfs",
>   "config" : {
> "type" : "file",
> "connection" : "file:///",
> "config" : null,
> "workspaces" : {
>   "tmp" : {
> "location" : "/tmp",
> "writable" : true,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   },
>   "root" : {
> "location" : "/",
> "writable" : false,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   }
> },
> "formats" : {
>   "psv" : {
> "type" : "text",
> "extensions" : [ "tbl" ],
> "delimiter" : "|"
>   },
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "httpd" : {
> "type" : "httpd",
> "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "pcap" : {
> "type" : "pcap"
>   },
>   "pcapng" : {
> "type" : "pcapng",
> "extensions" : [ "pcapng" ]
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "sequencefile" : {
> "type" : "sequencefile",
> "extensions" : [ "seq" ]
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "hbase",
>   "config" : {
> "type" : "hbase",
> "config" : {
>   "hbase.zookeeper.quorum" : "localhost",
>   "hbase.zookeeper.property.clientPort" : "2181"
> },
> "size.calculator.enabled" : 

[jira] [Created] (DRILL-7190) Missing backward compatibility for REST API with DRILL-6562

2019-04-19 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7190:


 Summary: Missing backward compatibility for REST API with 
DRILL-6562
 Key: DRILL-7190
 URL: https://issues.apache.org/jira/browse/DRILL-7190
 Project: Apache Drill
  Issue Type: Bug
  Components: Web Server
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Vitalii Diravka
 Fix For: 1.16.0


With DRILL-6562 I am seeing additional changes that no longer support the older 
request URLs. For example:

1) Earlier, export of a plugin config defaulted to JSON format using the URL 
*/storage/\{name}/export*; the new URL is */storage/\{name}/export/\{format}*. 
This means the older one is not supported anymore. Is that intended, or should 
we treat the format as JSON by default when it is not provided?

2) The POST URL to create and update a plugin changed from */storage/\{name}* 
to */storage/create_update*.

3) Once a storage plugin is deleted, the page is no longer redirected to 
*/storage*, as it was in 1.15, because of this line change: 
[https://github.com/apache/drill/commit/5fff1d8bff899e1af551c16f26a58b6b1d033ffb#diff-274673e64e6f54be595a8703753123b0R115]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822107#comment-16822107
 ] 

Sorabh Hamirwasia commented on DRILL-7186:
--

[~vitalii] - Given that the changes are small, I would say let's provide those 
for a better user experience. So, just to clarify, you will open a PR for all 3 
issues. Issues 1 and 2 relate to backward compatibility, and especially issue 3 
is a user-experience problem that definitely should be fixed.

> Missing storage.json REST endpoint.
> ---
>
> Key: DRILL-7186
> URL: https://issues.apache.org/jira/browse/DRILL-7186
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Arina Ielchiieva
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Steps:*
> 1. Open page: http://:8047/storage.json
> *Expected result:*
> storage.json is opened:
> {noformat}
> [ {
>   "name" : "cp",
>   "config" : {
> "type" : "file",
> "connection" : "classpath:///",
> "config" : null,
> "workspaces" : { },
> "formats" : {
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "dfs",
>   "config" : {
> "type" : "file",
> "connection" : "file:///",
> "config" : null,
> "workspaces" : {
>   "tmp" : {
> "location" : "/tmp",
> "writable" : true,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   },
>   "root" : {
> "location" : "/",
> "writable" : false,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   }
> },
> "formats" : {
>   "psv" : {
> "type" : "text",
> "extensions" : [ "tbl" ],
> "delimiter" : "|"
>   },
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "httpd" : {
> "type" : "httpd",
> "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "pcap" : {
> "type" : "pcap"
>   },
>   "pcapng" : {
> "type" : "pcapng",
> "extensions" : [ "pcapng" ]
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "sequencefile" : {
> "type" : "sequencefile",
> "extensions" : [ "seq" ]
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "hbase",
>   "config" : {
> "type" : "hbase",
> "config" : {
>   "hbase.zookeeper.quorum" : "localhost",
>   "hbase.zookeeper.property.clientPort" : "2181"
> },
> "size.calculator.enabled" : false,
> "enabled" : false
>   }
> }, {
>   "name" : "hive",
>   "config" : {
> "type" : "hive",
> "configProps" : {
>   "hive.metastore.uris" : "",
>   "javax.jdo.option.ConnectionURL" : 
> "jdbc:derby:;databaseName=../sample-data/drill_hive_db;create=true",
>   "hive.metastore.warehouse.dir" : "/tmp/drill_hive_wh",
>   "fs.default.name" : "file:///",
>   "hive.metastore.sasl.enabled" : 

[jira] [Updated] (DRILL-7105) Error while building the Drill native client

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7105:
-
Fix Version/s: (was: 1.16.0)

> Error while building the Drill native client
> 
>
> Key: DRILL-7105
> URL: https://issues.apache.org/jira/browse/DRILL-7105
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> *Steps:*
>  # cd contrib/native/client
>  # mkdir build
>  # cd build && cmake -std=c++11 -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug 
> ..
>  # make
> *Expected result:*
>  The native client is built successfully.
> *Actual result:*
>  Error happens:
> {noformat}
> [  4%] Built target y2038
> [  7%] Building CXX object 
> src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o
> In file included from /usr/include/c++/5/mutex:35:0,
>  from /usr/local/include/google/protobuf/stubs/mutex.h:33,
>  from /usr/local/include/google/protobuf/stubs/common.h:52,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file 
> requires compiler and library support for the ISO C++ 2011 standard. This 
> support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
>  #error This file requires compiler and library support \
>   ^
> In file included from /usr/local/include/google/protobuf/stubs/common.h:52:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/local/include/google/protobuf/stubs/mutex.h:58:8: error: 'mutex' in 
> namespace 'std' does not name a type
>std::mutex mu_;
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Lock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:51:17: error: 'mu_' was not 
> declared in this scope
>void Lock() { mu_.lock(); }
>  ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Unlock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:52:19: error: 'mu_' was not 
> declared in this scope
>void Unlock() { mu_.unlock(); }
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:61:7: error: expected 
> nested-name-specifier before 'Mutex'
>  using Mutex = WrappedMutex;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h:66:28: error: expected ')' 
> before '*' token
>explicit MutexLock(Mutex *mu) : mu_(mu) { this->mu_->Lock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h:69:3: error: 'Mutex' does 
> not name a type
>Mutex *const mu_;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: In destructor 
> 'google::protobuf::internal::MutexLock::~MutexLock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:67:24: error: 'class 
> google::protobuf::internal::MutexLock' has no member named 'mu_'
>~MutexLock() { this->mu_->Unlock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:80:33: error: expected ')' 
> before '*' token
>explicit MutexLockMaybe(Mutex *mu) :
>  ^
> In file included from /usr/local/include/google/protobuf/arena.h:48:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:23,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected unqualified-id before end 
> of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected declaration before end of 
> line
> src/protobuf/CMakeFiles/protomsgs.dir/build.make:62: recipe for target 
> 'src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o' failed
> make[2]: *** 

[jira] [Comment Edited] (DRILL-6642) Update protocol-buffers version

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821422#comment-16821422
 ] 

Sorabh Hamirwasia edited comment on DRILL-6642 at 4/19/19 5:47 PM:
---

This change was reverted in the 1.16 release and is tracked by DRILL-7188


was (Author: shamirwasia):
This change is reverted into 1.16 version

> Update protocol-buffers version
> ---
>
> Key: DRILL-6642
> URL: https://issues.apache.org/jira/browse/DRILL-6642
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Anton Gozhiy
>Priority: Major
> Fix For: 1.17.0
>
>
> Currently Drill uses {{protocol-buffers}} version 2.5.0.
>  The latest version in the Maven repository is 3.6.0: 
> [https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java]
> The new version has many useful enhancements that can be used in Drill.
>  One of them is the {{UNRECOGNIZED}} enum value, e.g. in {{NullValue}}, which can 
> be used in place of null values for {{ProtocolMessageEnum}} - DRILL-6639. 
>  It looks like {{NullValue}} can be used instead of the null returned from 
> {{valueOf()}} (_or {{forNumber()}}, since {{valueOf()}} is deprecated in the 
> newer protobuf version_):
>  
> [https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/NullValue]
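As an illustration of the {{forNumber()}}/{{UNRECOGNIZED}} pattern described above, here is a minimal, self-contained Java sketch; the {{QueryState}} enum and its values are hypothetical stand-ins for protobuf-generated code, not the real generated classes:

```java
// Hypothetical enum mimicking the proto3 generated-code pattern discussed
// above; it is NOT actual protobuf-generated code. In generated proto3 code,
// forNumber() returns null for unknown wire values, while the extra
// UNRECOGNIZED constant can stand in for that null (cf. DRILL-6639).
enum QueryState {
  PENDING(0), RUNNING(1), COMPLETED(2), UNRECOGNIZED(-1);

  private final int number;

  QueryState(int number) { this.number = number; }

  int getNumber() { return number; }

  // Mirrors the generated forNumber(): null for values not in the .proto file.
  static QueryState forNumber(int value) {
    switch (value) {
      case 0: return PENDING;
      case 1: return RUNNING;
      case 2: return COMPLETED;
      default: return null;
    }
  }

  // Null-safe lookup: map unknown values to UNRECOGNIZED instead of null.
  static QueryState forNumberOrUnrecognized(int value) {
    QueryState state = forNumber(value);
    return state == null ? UNRECOGNIZED : state;
  }
}

public class EnumNullDemo {
  public static void main(String[] args) {
    System.out.println(QueryState.forNumber(99));               // null
    System.out.println(QueryState.forNumberOrUnrecognized(99)); // UNRECOGNIZED
  }
}
```

With real proto3-generated enums the {{UNRECOGNIZED}} constant is emitted automatically; the helper above only illustrates the null-to-UNRECOGNIZED mapping that DRILL-6639 is after.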



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (DRILL-7105) Error while building the Drill native client

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821423#comment-16821423
 ] 

Sorabh Hamirwasia edited comment on DRILL-7105 at 4/19/19 5:47 PM:
---

This change was reverted in the 1.16.0 branch and is tracked by DRILL-7189


was (Author: shamirwasia):
This change is reverted in 1.16.0 branch

> Error while building the Drill native client
> 
>
> Key: DRILL-7105
> URL: https://issues.apache.org/jira/browse/DRILL-7105
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> *Steps:*
>  # cd contrib/native/client
>  # mkdir build
>  # cd build && cmake -std=c++11 -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug 
> ..
>  # make
> *Expected result:*
>  The native client is built successfully.
> *Actual result:*
>  Error happens:
> {noformat}
> [  4%] Built target y2038
> [  7%] Building CXX object 
> src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o
> In file included from /usr/include/c++/5/mutex:35:0,
>  from /usr/local/include/google/protobuf/stubs/mutex.h:33,
>  from /usr/local/include/google/protobuf/stubs/common.h:52,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file 
> requires compiler and library support for the ISO C++ 2011 standard. This 
> support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
>  #error This file requires compiler and library support \
>   ^
> In file included from /usr/local/include/google/protobuf/stubs/common.h:52:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/local/include/google/protobuf/stubs/mutex.h:58:8: error: 'mutex' in 
> namespace 'std' does not name a type
>std::mutex mu_;
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Lock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:51:17: error: 'mu_' was not 
> declared in this scope
>void Lock() { mu_.lock(); }
>  ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Unlock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:52:19: error: 'mu_' was not 
> declared in this scope
>void Unlock() { mu_.unlock(); }
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:61:7: error: expected 
> nested-name-specifier before 'Mutex'
>  using Mutex = WrappedMutex;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h:66:28: error: expected ')' 
> before '*' token
>explicit MutexLock(Mutex *mu) : mu_(mu) { this->mu_->Lock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h:69:3: error: 'Mutex' does 
> not name a type
>Mutex *const mu_;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: In destructor 
> 'google::protobuf::internal::MutexLock::~MutexLock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:67:24: error: 'class 
> google::protobuf::internal::MutexLock' has no member named 'mu_'
>~MutexLock() { this->mu_->Unlock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:80:33: error: expected ')' 
> before '*' token
>explicit MutexLockMaybe(Mutex *mu) :
>  ^
> In file included from /usr/local/include/google/protobuf/arena.h:48:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:23,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected unqualified-id before end 
> of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected declaration before end of 
> line
> 

[jira] [Updated] (DRILL-6642) Update protocol-buffers version

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6642:
-
Fix Version/s: (was: 1.16.0)

> Update protocol-buffers version
> ---
>
> Key: DRILL-6642
> URL: https://issues.apache.org/jira/browse/DRILL-6642
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Anton Gozhiy
>Priority: Major
> Fix For: 1.17.0
>
>
> Currently Drill uses {{protocol-buffers}} version 2.5.0.
>  The latest version in the Maven repository is 3.6.0: 
> [https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java]
> The new version has many useful enhancements that can be used in Drill.
>  One of them is the {{UNRECOGNIZED}} enum value, e.g. in {{NullValue}}, which can 
> be used in place of null values for {{ProtocolMessageEnum}} - DRILL-6639. 
>  It looks like {{NullValue}} can be used instead of the null returned from 
> {{valueOf()}} (_or {{forNumber()}}, since {{valueOf()}} is deprecated in the 
> newer protobuf version_):
>  
> [https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/NullValue]





[jira] [Created] (DRILL-7189) Revert DRILL-7105 Error while building the Drill native client

2019-04-19 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7189:


 Summary: Revert DRILL-7105 Error while building the Drill native 
client
 Key: DRILL-7189
 URL: https://issues.apache.org/jira/browse/DRILL-7189
 Project: Apache Drill
  Issue Type: Task
  Components: Client - C++
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.16.0


Revert the change in the 1.16 branch only, since the protobuf upgrade change is also 
reverted via DRILL-7188





[jira] [Created] (DRILL-7188) Revert DRILL-6642: Update protocol-buffers version

2019-04-19 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7188:


 Summary: Revert DRILL-6642: Update protocol-buffers version
 Key: DRILL-7188
 URL: https://issues.apache.org/jira/browse/DRILL-7188
 Project: Apache Drill
  Issue Type: Task
  Components: Tools, Build & Test
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Sorabh Hamirwasia
 Fix For: 1.16.0


Revert the protobuf changes for the 1.16 branch only.





[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Fix Version/s: (was: 1.17.0)
   1.16.0

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.





[jira] [Comment Edited] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822052#comment-16822052
 ] 

Sorabh Hamirwasia edited comment on DRILL-7186 at 4/19/19 5:02 PM:
---

@vdiravka / [~arina]

With DRILL-6562 I am seeing additional changes that do not support the older 
request URLs. For example:

1) Earlier, export of a plugin config defaulted to the JSON format using the URL 
*/storage/\{name}/export*; now the new URL is */storage/\{name}/export/\{format}*, 
which means the older one is no longer supported. Is this intended, or should we 
treat the format as JSON by default when it is not provided?

2) The POST URL to create and update a plugin has changed from */storage/\{name}* 
to */storage/create_update*

3) Once a storage plugin is deleted, it is no longer redirected to */storage*, 
as it was in 1.15, because of this line change: 
[https://github.com/apache/drill/commit/5fff1d8bff899e1af551c16f26a58b6b1d033ffb#diff-274673e64e6f54be595a8703753123b0R115]


was (Author: shamirwasia):
@vdiravka / [~arina]

With DRILL-6562 I am seeing additional changes which is not supporting older 
requests URL. For example:

1) Earlier export of plugin config was done for json format by default using 
URL:  */storage/\{name}/export* and now the new URL is 
*/storage/\{name}/export/\{format}*. This means the older one is not supported 
anymore. Is it intended or should we treat the format as JSON by default if not 
provided ?

2) The POST URL to create and update plugin is changed from */storage/\{name}* 
to */storage/create_update*

> Missing storage.json REST endpoint.
> ---
>
> Key: DRILL-7186
> URL: https://issues.apache.org/jira/browse/DRILL-7186
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Arina Ielchiieva
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Steps:*
> 1. Open page: http://<drillbit-host>:8047/storage.json
> *Expected result:*
> storage.json is opened:
> {noformat}
> [ {
>   "name" : "cp",
>   "config" : {
> "type" : "file",
> "connection" : "classpath:///",
> "config" : null,
> "workspaces" : { },
> "formats" : {
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "dfs",
>   "config" : {
> "type" : "file",
> "connection" : "file:///",
> "config" : null,
> "workspaces" : {
>   "tmp" : {
> "location" : "/tmp",
> "writable" : true,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   },
>   "root" : {
> "location" : "/",
> "writable" : false,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   }
> },
> "formats" : {
>   "psv" : {
> "type" : "text",
> "extensions" : [ "tbl" ],
> "delimiter" : "|"
>   },
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "httpd" : {
> "type" : "httpd",
> "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "pcap" : {
> "type" : "pcap"
>   },
>   "pcapng" : {
> "type" : "pcapng",
> "extensions" : [ "pcapng" ]
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "sequencefile" : {
> "type" : "sequencefile",
> "extensions" : [ "seq" ]
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> 

[jira] [Commented] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822052#comment-16822052
 ] 

Sorabh Hamirwasia commented on DRILL-7186:
--

@vdiravka / [~arina]

With DRILL-6562 I am seeing additional changes that do not support the older 
request URLs. For example:

1) Earlier, export of a plugin config defaulted to the JSON format using the URL 
*/storage/\{name}/export*; now the new URL is */storage/\{name}/export/\{format}*, 
which means the older one is no longer supported. Is this intended, or should we 
treat the format as JSON by default when it is not provided?

2) The POST URL to create and update a plugin has changed from */storage/\{name}* 
to */storage/create_update*
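The old-versus-new export endpoint shapes discussed in this comment can be sketched with Java 11's {{java.net.http}} types. This is only an illustration: the host, the default web port 8047, and the class and method names are assumptions; only the URL path shapes come from the comment itself.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class StoragePluginUrls {
  // Assumed drillbit web address; adjust for your cluster.
  static final String BASE = "http://localhost:8047";

  // Pre-DRILL-6562 export URL, where the format defaulted to JSON.
  static HttpRequest oldExport(String plugin) {
    return HttpRequest.newBuilder(URI.create(BASE + "/storage/" + plugin + "/export"))
        .GET().build();
  }

  // Post-DRILL-6562 export URL with an explicit format path segment.
  static HttpRequest newExport(String plugin, String format) {
    return HttpRequest.newBuilder(URI.create(BASE + "/storage/" + plugin + "/export/" + format))
        .GET().build();
  }

  public static void main(String[] args) {
    System.out.println(oldExport("dfs").uri());
    System.out.println(newExport("dfs", "json").uri());
  }
}
```

The requests are only built, never sent, so no running drillbit is needed to compare the two URL forms.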

> Missing storage.json REST endpoint.
> ---
>
> Key: DRILL-7186
> URL: https://issues.apache.org/jira/browse/DRILL-7186
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Arina Ielchiieva
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Steps:*
> 1. Open page: http://<drillbit-host>:8047/storage.json
> *Expected result:*
> storage.json is opened:
> {noformat}
> [ {
>   "name" : "cp",
>   "config" : {
> "type" : "file",
> "connection" : "classpath:///",
> "config" : null,
> "workspaces" : { },
> "formats" : {
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "dfs",
>   "config" : {
> "type" : "file",
> "connection" : "file:///",
> "config" : null,
> "workspaces" : {
>   "tmp" : {
> "location" : "/tmp",
> "writable" : true,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   },
>   "root" : {
> "location" : "/",
> "writable" : false,
> "defaultInputFormat" : null,
> "allowAccessOutsideWorkspace" : false
>   }
> },
> "formats" : {
>   "psv" : {
> "type" : "text",
> "extensions" : [ "tbl" ],
> "delimiter" : "|"
>   },
>   "csv" : {
> "type" : "text",
> "extensions" : [ "csv" ],
> "delimiter" : ","
>   },
>   "tsv" : {
> "type" : "text",
> "extensions" : [ "tsv" ],
> "delimiter" : "\t"
>   },
>   "httpd" : {
> "type" : "httpd",
> "logFormat" : "%h %t \"%r\" %>s %b \"%{Referer}i\""
>   },
>   "parquet" : {
> "type" : "parquet"
>   },
>   "json" : {
> "type" : "json",
> "extensions" : [ "json" ]
>   },
>   "pcap" : {
> "type" : "pcap"
>   },
>   "pcapng" : {
> "type" : "pcapng",
> "extensions" : [ "pcapng" ]
>   },
>   "avro" : {
> "type" : "avro"
>   },
>   "sequencefile" : {
> "type" : "sequencefile",
> "extensions" : [ "seq" ]
>   },
>   "csvh" : {
> "type" : "text",
> "extensions" : [ "csvh" ],
> "extractHeader" : true,
> "delimiter" : ","
>   },
>   "image" : {
> "type" : "image",
> "extensions" : [ "jpg", "jpeg", "jpe", "tif", "tiff", "dng", "psd", 
> "png", "bmp", "gif", "ico", "pcx", "wav", "wave", "avi", "webp", "mov", 
> "mp4", "m4a", "m4p", "m4b", "m4r", "m4v", "3gp", "3g2", "eps", "epsf", 
> "epsi", "ai", "arw", "crw", "cr2", "nef", "orf", "raf", "rw2", "rwl", "srw", 
> "x3f" ]
>   }
> },
> "enabled" : true
>   }
> }, {
>   "name" : "hbase",
>   "config" : {
> "type" : "hbase",
> "config" : {
>   "hbase.zookeeper.quorum" : "localhost",
>   "hbase.zookeeper.property.clientPort" : "2181"
> },
> "size.calculator.enabled" : false,
> "enabled" : false
>   }
> }, {
>   "name" : "hive",
>   "config" : {
> "type" : "hive",
> "configProps" : {
>   "hive.metastore.uris" 

[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Labels: ready-to-commit  (was: )

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.





[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Reviewer: Sorabh Hamirwasia

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.





[jira] [Commented] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-19 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822012#comment-16822012
 ] 

Sorabh Hamirwasia commented on DRILL-7185:
--

[~cgivre] - Please move the Jira to Reviewable state

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.





[jira] [Commented] (DRILL-5509) Upgrade Drill protobuf support from 2.5.0 to latest 3.3

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-5509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821469#comment-16821469
 ] 

Sorabh Hamirwasia commented on DRILL-5509:
--

The protobuf upgrade was reverted in 1.16 and is planned for the 1.17 release

> Upgrade Drill protobuf support from 2.5.0 to latest 3.3
> ---
>
> Key: DRILL-5509
> URL: https://issues.apache.org/jira/browse/DRILL-5509
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.10.0
>Reporter: Paul Rogers
>Assignee: Anton Gozhiy
>Priority: Minor
> Fix For: 1.16.0
>
>
> Drill uses Google Protobufs for RPC. Drill's Maven compile requires version 
> 2.5.0 from Feb. 2013. The latest version is 3.3. Over time, it may become 
> increasingly hard to find and build a four-year-old version.
> Upgrade Drill to use the latest Protobuf version. This will require updating 
> the Maven protobuf plugin, and may require other upgrades as well.





[jira] [Updated] (DRILL-7105) Error while building the Drill native client

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7105:
-
Fix Version/s: 1.17.0

> Error while building the Drill native client
> 
>
> Key: DRILL-7105
> URL: https://issues.apache.org/jira/browse/DRILL-7105
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0, 1.17.0
>
>
> *Steps:*
>  # cd contrib/native/client
>  # mkdir build
>  # cd build && cmake -std=c++11 -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug 
> ..
>  # make
> *Expected result:*
>  The native client is built successfully.
> *Actual result:*
>  Error happens:
> {noformat}
> [  4%] Built target y2038
> [  7%] Building CXX object 
> src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o
> In file included from /usr/include/c++/5/mutex:35:0,
>  from /usr/local/include/google/protobuf/stubs/mutex.h:33,
>  from /usr/local/include/google/protobuf/stubs/common.h:52,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file 
> requires compiler and library support for the ISO C++ 2011 standard. This 
> support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
>  #error This file requires compiler and library support \
>   ^
> In file included from /usr/local/include/google/protobuf/stubs/common.h:52:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/local/include/google/protobuf/stubs/mutex.h:58:8: error: 'mutex' in 
> namespace 'std' does not name a type
>std::mutex mu_;
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Lock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:51:17: error: 'mu_' was not 
> declared in this scope
>void Lock() { mu_.lock(); }
>  ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Unlock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:52:19: error: 'mu_' was not 
> declared in this scope
>void Unlock() { mu_.unlock(); }
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:61:7: error: expected 
> nested-name-specifier before 'Mutex'
>  using Mutex = WrappedMutex;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h:66:28: error: expected ')' 
> before '*' token
>explicit MutexLock(Mutex *mu) : mu_(mu) { this->mu_->Lock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h:69:3: error: 'Mutex' does 
> not name a type
>Mutex *const mu_;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: In destructor 
> 'google::protobuf::internal::MutexLock::~MutexLock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:67:24: error: 'class 
> google::protobuf::internal::MutexLock' has no member named 'mu_'
>~MutexLock() { this->mu_->Unlock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:80:33: error: expected ')' 
> before '*' token
>explicit MutexLockMaybe(Mutex *mu) :
>  ^
> In file included from /usr/local/include/google/protobuf/arena.h:48:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:23,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected unqualified-id before end 
> of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected declaration before end of 
> line
> src/protobuf/CMakeFiles/protomsgs.dir/build.make:62: recipe for target 
> 'src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o' failed
> make[2]: *** 

[jira] [Commented] (DRILL-7105) Error while building the Drill native client

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16821423#comment-16821423
 ] 

Sorabh Hamirwasia commented on DRILL-7105:
--

This change was reverted in the 1.16.0 branch

> Error while building the Drill native client
> 
>
> Key: DRILL-7105
> URL: https://issues.apache.org/jira/browse/DRILL-7105
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - C++
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0, 1.17.0
>
>
> *Steps:*
>  # cd contrib/native/client
>  # mkdir build
>  # cd build && cmake -std=c++11 -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug 
> ..
>  # make
> *Expected result:*
>  The native client is built successfully.
> *Actual result:*
>  Error happens:
> {noformat}
> [  4%] Built target y2038
> [  7%] Building CXX object 
> src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o
> In file included from /usr/include/c++/5/mutex:35:0,
>  from /usr/local/include/google/protobuf/stubs/mutex.h:33,
>  from /usr/local/include/google/protobuf/stubs/common.h:52,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file 
> requires compiler and library support for the ISO C++ 2011 standard. This 
> support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
>  #error This file requires compiler and library support \
>   ^
> In file included from /usr/local/include/google/protobuf/stubs/common.h:52:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/local/include/google/protobuf/stubs/mutex.h:58:8: error: 'mutex' in 
> namespace 'std' does not name a type
>std::mutex mu_;
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Lock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:51:17: error: 'mu_' was not 
> declared in this scope
>void Lock() { mu_.lock(); }
>  ^
> /usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
> google::protobuf::internal::WrappedMutex::Unlock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:52:19: error: 'mu_' was not 
> declared in this scope
>void Unlock() { mu_.unlock(); }
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:61:7: error: expected 
> nested-name-specifier before 'Mutex'
>  using Mutex = WrappedMutex;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h:66:28: error: expected ')' 
> before '*' token
>explicit MutexLock(Mutex *mu) : mu_(mu) { this->mu_->Lock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h:69:3: error: 'Mutex' does 
> not name a type
>Mutex *const mu_;
>^
> /usr/local/include/google/protobuf/stubs/mutex.h: In destructor 
> 'google::protobuf::internal::MutexLock::~MutexLock()':
> /usr/local/include/google/protobuf/stubs/mutex.h:67:24: error: 'class 
> google::protobuf::internal::MutexLock' has no member named 'mu_'
>~MutexLock() { this->mu_->Unlock(); }
> ^
> /usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
> /usr/local/include/google/protobuf/stubs/mutex.h:80:33: error: expected ')' 
> before '*' token
>explicit MutexLockMaybe(Mutex *mu) :
>  ^
> In file included from /usr/local/include/google/protobuf/arena.h:48:0,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:23,
>  from 
> /home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected unqualified-id before end 
> of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
> /usr/include/c++/5/typeinfo:39:37: error: expected declaration before end of 
> line
> src/protobuf/CMakeFiles/protomsgs.dir/build.make:62: recipe for target 
> 'src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o' failed

[jira] [Commented] (DRILL-6642) Update protocol-buffers version

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821422#comment-16821422
 ] 

Sorabh Hamirwasia commented on DRILL-6642:
--

This change was reverted in version 1.16.

> Update protocol-buffers version
> ---
>
> Key: DRILL-6642
> URL: https://issues.apache.org/jira/browse/DRILL-6642
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Anton Gozhiy
>Priority: Major
> Fix For: 1.16.0, 1.17.0
>
>
> Currently Drill uses {{protocol-buffers}} version 2.5.0.
>  The latest version in the Maven repo is 3.6.0: 
> [https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java]
> The new version has a lot of useful enhancements that can be used in Drill.
>  One of them is the {{UNRECOGNIZED}} enum value ({{NullValue}}), which can help 
> handle unknown values in place of null for {{ProtocolMessageEnum}} - DRILL-6639. 
>  It looks like {{NullValue}} can be used instead of the null returned from 
> {{valueOf()}} (_or {{forNumber()}}, since {{valueOf()}} is deprecated in the 
> newer protobuf version_):
>  
> [https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/NullValue]
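A minimal, self-contained sketch of the null-versus-sentinel pattern the description above refers to. The {{Priority}} enum and its methods here are hypothetical illustrations of the *shape* of protobuf-generated Java enums, not the actual protobuf API or Drill code:

```java
// Hypothetical enum mimicking a protobuf-generated Java enum: protobuf 3.x adds
// an UNRECOGNIZED sentinel so callers can avoid null checks for unknown values.
public class EnumDemo {
    enum Priority {
        LOW(0), HIGH(1),
        UNRECOGNIZED(-1);   // sentinel for unknown wire values

        private final int number;
        Priority(int number) { this.number = number; }

        // protobuf 2.x style lookup: returns null for unknown numbers
        static Priority forNumberOrNull(int n) {
            for (Priority p : values()) {
                if (p != UNRECOGNIZED && p.number == n) return p;
            }
            return null;
        }

        // protobuf 3.x style: unknown numbers map to UNRECOGNIZED, never null
        static Priority forNumberOrUnrecognized(int n) {
            Priority p = forNumberOrNull(n);
            return p == null ? UNRECOGNIZED : p;
        }
    }

    public static void main(String[] args) {
        System.out.println(Priority.forNumberOrNull(42));          // null
        System.out.println(Priority.forNumberOrUnrecognized(42));  // UNRECOGNIZED
    }
}
```

With the sentinel, callers such as the DRILL-6639 code paths can branch on one well-known enum constant instead of defending against null everywhere.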



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-6642) Update protocol-buffers version

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6642:
-
Fix Version/s: 1.17.0

> Update protocol-buffers version
> ---
>
> Key: DRILL-6642
> URL: https://issues.apache.org/jira/browse/DRILL-6642
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.14.0
>Reporter: Vitalii Diravka
>Assignee: Anton Gozhiy
>Priority: Major
> Fix For: 1.16.0, 1.17.0
>
>
> Currently Drill uses {{protocol-buffers}} version 2.5.0.
>  The latest version in the Maven repo is 3.6.0: 
> [https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java]
> The new version has a lot of useful enhancements that can be used in Drill.
>  One of them is the {{UNRECOGNIZED}} enum value ({{NullValue}}), which can help 
> handle unknown values in place of null for {{ProtocolMessageEnum}} - DRILL-6639. 
>  It looks like {{NullValue}} can be used instead of the null returned from 
> {{valueOf()}} (_or {{forNumber()}}, since {{valueOf()}} is deprecated in the 
> newer protobuf version_):
>  
> [https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/NullValue]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Affects Version/s: (was: 1.15.0)
   1.11.0

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Blocker
> Fix For: 1.16.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Priority: Major  (was: Blocker)

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
> Fix For: 1.16.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7185) Drill Fails to Read Large Packets

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7185:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> Drill Fails to Read Large Packets
> -
>
> Key: DRILL-7185
> URL: https://issues.apache.org/jira/browse/DRILL-7185
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
> Fix For: 1.17.0
>
>
> Drill fails to read large packets and crashes.  This small fix corrects that 
> issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7183) TPCDS query 10, 35, 69 take longer with sf 1000 when Statistics are disabled

2019-04-18 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7183:
-
Labels: ready-to-commit  (was: )

> TPCDS query 10, 35, 69 take longer with sf 1000 when Statistics are disabled
> 
>
> Key: DRILL-7183
> URL: https://issues.apache.org/jira/browse/DRILL-7183
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.16.0
>Reporter: Robert Hou
>Assignee: Hanumath Rao Maduri
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Query 69 runs 150% slower when Statistics is disabled.  Here is the query:
> {noformat}
> SELECT
>   cd_gender,
>   cd_marital_status,
>   cd_education_status,
>   count(*) cnt1,
>   cd_purchase_estimate,
>   count(*) cnt2,
>   cd_credit_rating,
>   count(*) cnt3
> FROM
>   customer c, customer_address ca, customer_demographics
> WHERE
>   c.c_current_addr_sk = ca.ca_address_sk AND
> ca_state IN ('KY', 'GA', 'NM') AND
> cd_demo_sk = c.c_current_cdemo_sk AND
> exists(SELECT *
>FROM store_sales, date_dim
>WHERE c.c_customer_sk = ss_customer_sk AND
>  ss_sold_date_sk = d_date_sk AND
>  d_year = 2001 AND
>  d_moy BETWEEN 4 AND 4 + 2) AND
> (NOT exists(SELECT *
> FROM web_sales, date_dim
> WHERE c.c_customer_sk = ws_bill_customer_sk AND
>   ws_sold_date_sk = d_date_sk AND
>   d_year = 2001 AND
>   d_moy BETWEEN 4 AND 4 + 2) AND
>   NOT exists(SELECT *
>  FROM catalog_sales, date_dim
>  WHERE c.c_customer_sk = cs_ship_customer_sk AND
>cs_sold_date_sk = d_date_sk AND
>d_year = 2001 AND
>d_moy BETWEEN 4 AND 4 + 2))
> GROUP BY cd_gender, cd_marital_status, cd_education_status,
>   cd_purchase_estimate, cd_credit_rating
> ORDER BY cd_gender, cd_marital_status, cd_education_status,
>   cd_purchase_estimate, cd_credit_rating
> LIMIT 100;
> {noformat}
> This regression is caused by commit 982e98061e029a39f1c593f695c0d93ec7079f0d. 
>  This commit should be reverted for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7165) Redundant Checksum calculating for ASC files

2019-04-11 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7165:
-
Reviewer: Sorabh Hamirwasia

> Redundant Checksum calculating for ASC files
> 
>
> Key: DRILL-7165
> URL: https://issues.apache.org/jira/browse/DRILL-7165
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Currently {{checksum-maven-plugin}} creates sha-512 checksum files for tar and 
> zip archives and for ASC (signature) files. The latter is redundant. For 
> example:
> apache-drill-1.15.0-src.tar.gz.asc.sha512
> apache-drill-1.15.0-src.zip.asc.sha512
> apache-drill-1.15.0.tar.gz.asc.sha512
> The proper list of files: 
> [http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (DRILL-7165) Redundant Checksum calculating for ASC files

2019-04-11 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia resolved DRILL-7165.
--
Resolution: Fixed

> Redundant Checksum calculating for ASC files
> 
>
> Key: DRILL-7165
> URL: https://issues.apache.org/jira/browse/DRILL-7165
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Currently {{checksum-maven-plugin}} creates sha-512 checksum files for tar and 
> zip archives and for ASC (signature) files. The latter is redundant. For 
> example:
> apache-drill-1.15.0-src.tar.gz.asc.sha512
> apache-drill-1.15.0-src.zip.asc.sha512
> apache-drill-1.15.0.tar.gz.asc.sha512
> The proper list of files: 
> [http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7165) Redundant Checksum calculating for ASC files

2019-04-11 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7165:
-
Labels: ready-to-commit  (was: )

> Redundant Checksum calculating for ASC files
> 
>
> Key: DRILL-7165
> URL: https://issues.apache.org/jira/browse/DRILL-7165
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Currently {{checksum-maven-plugin}} creates sha-512 checksum files for tar and 
> zip archives and for ASC (signature) files. The latter is redundant. For 
> example:
> apache-drill-1.15.0-src.tar.gz.asc.sha512
> apache-drill-1.15.0-src.zip.asc.sha512
> apache-drill-1.15.0.tar.gz.asc.sha512
> The proper list of files: 
> [http://home.apache.org/~vitalii/drill/releases/1.15.0/rc2/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7170) IllegalStateException: Record count not set for this vector container

2019-04-11 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7170:


 Summary: IllegalStateException: Record count not set for this 
vector container
 Key: DRILL-7170
 URL: https://issues.apache.org/jira/browse/DRILL-7170
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Reporter: Sorabh Hamirwasia
 Fix For: 1.17.0



{code:java}
Query: 
/root/drillAutomation/master/framework/resources/Advanced/tpcds/tpcds_sf1/original/maprdb/json/query95.sql
WITH ws_wh AS
(
SELECT ws1.ws_order_number,
ws1.ws_warehouse_sk wh1,
ws2.ws_warehouse_sk wh2
FROM   web_sales ws1,
web_sales ws2
WHERE  ws1.ws_order_number = ws2.ws_order_number
AND    ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)
SELECT
Count(DISTINCT ws_order_number) AS `order count` ,
Sum(ws_ext_ship_cost)   AS `total shipping cost` ,
Sum(ws_net_profit)  AS `total net profit`
FROM web_sales ws1 ,
date_dim ,
customer_address ,
web_site
WHERE    d_date BETWEEN '2000-04-01' AND  (
Cast('2000-04-01' AS DATE) + INTERVAL '60' day)
AND  ws1.ws_ship_date_sk = d_date_sk
AND  ws1.ws_ship_addr_sk = ca_address_sk
AND  ca_state = 'IN'
AND  ws1.ws_web_site_sk = web_site_sk
AND  web_company_name = 'pri'
AND  ws1.ws_order_number IN
(
SELECT ws_order_number
FROM   ws_wh)
AND  ws1.ws_order_number IN
(
SELECT wr_order_number
FROM   web_returns,
ws_wh
WHERE  wr_order_number = ws_wh.ws_order_number)
ORDER BY count(DISTINCT ws_order_number)
LIMIT 100

Exception:

java.sql.SQLException: SYSTEM ERROR: IllegalStateException: Record count not 
set for this vector container

Fragment 2:3

Please, refer to logs for more information.

[Error Id: 4ed92fce-505b-40ba-ac0e-4a302c28df47 on drill87:31010]

  (java.lang.IllegalStateException) Record count not set for this vector 
container

org.apache.drill.shaded.guava.com.google.common.base.Preconditions.checkState():459
org.apache.drill.exec.record.VectorContainer.getRecordCount():394
org.apache.drill.exec.record.RecordBatchSizer.<init>():720
org.apache.drill.exec.record.RecordBatchSizer.<init>():704

org.apache.drill.exec.physical.impl.common.HashTableTemplate$BatchHolder.getActualSize():462

org.apache.drill.exec.physical.impl.common.HashTableTemplate.getActualSize():964

org.apache.drill.exec.physical.impl.common.HashTableTemplate.makeDebugString():973

org.apache.drill.exec.physical.impl.common.HashPartition.makeDebugString():601

org.apache.drill.exec.physical.impl.join.HashJoinBatch.makeDebugString():1313

org.apache.drill.exec.physical.impl.join.HashJoinBatch.executeBuildPhase():1105
org.apache.drill.exec.physical.impl.join.HashJoinBatch.innerNext():525
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.test.generated.HashAggregatorGen1068899.doWork():642
org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():296
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.physical.impl.BaseRootExec.next():104

org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():93
org.apache.drill.exec.physical.impl.BaseRootExec.next():94
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():422
org.apache.hadoop.security.UserGroupInformation.doAs():1669
org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748

at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:538)
at org.apache.drill.jdbc.impl.DrillCursor.next(DrillCursor.java:642)
at 
oadd.org.apache.calcite.avatica.AvaticaResultSet.next(AvaticaResultSet.java:217)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.next(DrillResultSetImpl.java:148)
 

[jira] [Updated] (DRILL-7166) Tests doing count(* ) with wildcards in table name are querying metadata cache and returning wrong results

2019-04-10 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7166:
-
Priority: Blocker  (was: Critical)

> Tests doing count(* ) with wildcards in table name are querying metadata 
> cache and returning wrong results
> --
>
> Key: DRILL-7166
> URL: https://issues.apache.org/jira/browse/DRILL-7166
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.16.0
>Reporter: Abhishek Girish
>Assignee: Pritesh Maker
>Priority: Blocker
> Fix For: 1.16.0
>
>
> Tests:
> {code}
> Functional/metadata_caching/data/drill4376_1.q
> Functional/metadata_caching/data/drill4376_2.q
> Functional/metadata_caching/data/drill4376_3.q
> Functional/metadata_caching/data/drill4376_4.q
> Functional/metadata_caching/data/drill4376_5.q
> Functional/metadata_caching/data/drill4376_6.q
> Functional/metadata_caching/data/drill4376_8.q
> {code}
> Example pattern of queries:
> {code}
> select count(*) from `lineitem_hierarchical_intint/*8*/3*`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7062) Run-time row group pruning

2019-04-10 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7062:
-
Fix Version/s: (was: 1.16.0)
   1.17.0

> Run-time row group pruning
> --
>
> Key: DRILL-7062
> URL: https://issues.apache.org/jira/browse/DRILL-7062
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Boaz Ben-Zvi
>Priority: Major
> Fix For: 1.17.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7028) Reduce the planning time of queries on large Parquet tables with large metadata cache files

2019-04-10 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7028:
-
Fix Version/s: 1.17.0

> Reduce the planning time of queries on large Parquet tables with large 
> metadata cache files
> ---
>
> Key: DRILL-7028
> URL: https://issues.apache.org/jira/browse/DRILL-7028
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: performance
> Fix For: 1.16.0, 1.17.0
>
>
> If the Parquet table has a large number of small files, the metadata cache 
> file grows large, and the planner has to read that large metadata cache file, 
> which adds planning-time overhead. Most of the query's execution time is 
> spent in the planning phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7089) Implement caching of BaseMetadata classes

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7089:
-
Component/s: Metadata

> Implement caching of BaseMetadata classes
> -
>
> Key: DRILL-7089
> URL: https://issues.apache.org/jira/browse/DRILL-7089
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> In the scope of DRILL-6852, new classes for metadata usage were introduced. 
> These classes may be reused in other GroupScan instances to reduce heap 
> usage when metadata is large.
> The idea is to store {{BaseMetadata}} inheritors in {{DrillTable}} and pass 
> them to the {{GroupScan}}, so that within the scope of a single query it is 
> possible to reuse them.
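The reuse idea described in this ticket can be sketched roughly as follows. All class names below are simplified stand-ins for illustration, not Drill's actual {{BaseMetadata}}, {{DrillTable}}, or {{GroupScan}} classes:

```java
// Illustrative sketch: table metadata is built once per table and the same
// instance is handed to every scan created within one query, so heap usage
// does not grow with the number of scans.
public class MetadataReuseDemo {

    // Stand-in for a BaseMetadata inheritor: expensive to build, then immutable.
    static final class TableMetadata {
        final String tableName;
        final long rowCount;
        TableMetadata(String tableName, long rowCount) {
            this.tableName = tableName;
            this.rowCount = rowCount;
        }
    }

    // Stand-in for DrillTable: owns the lazily built, cached metadata.
    static final class Table {
        private final String name;
        private TableMetadata metadata;  // built on first access, then reused

        Table(String name) { this.name = name; }

        synchronized TableMetadata metadata() {
            if (metadata == null) {
                // Pretend this reads large metadata cache files; it runs once.
                metadata = new TableMetadata(name, 1_000_000L);
            }
            return metadata;
        }
    }

    // Stand-in for GroupScan: receives the shared metadata instead of rebuilding it.
    static final class GroupScan {
        final TableMetadata metadata;
        GroupScan(TableMetadata metadata) { this.metadata = metadata; }
    }

    public static void main(String[] args) {
        Table table = new Table("lineitem");
        GroupScan scan1 = new GroupScan(table.metadata());
        GroupScan scan2 = new GroupScan(table.metadata());
        // Both scans share one metadata instance.
        System.out.println(scan1.metadata == scan2.metadata);  // true
    }
}
```

The key point is identity sharing: both scans hold a reference to the same metadata object, rather than each materializing its own copy from the cache files.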



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7065) Ensure backward compatibility is maintained

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813660#comment-16813660
 ] 

Sorabh Hamirwasia commented on DRILL-7065:
--

Merged with DRILL-7063

> Ensure backward compatibility is maintained 
> 
>
> Key: DRILL-7065
> URL: https://issues.apache.org/jira/browse/DRILL-7065
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7065) Ensure backward compatibility is maintained

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7065:
-
Reviewer: Volodymyr Vysotskyi

> Ensure backward compatibility is maintained 
> 
>
> Key: DRILL-7065
> URL: https://issues.apache.org/jira/browse/DRILL-7065
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.16.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7065) Ensure backward compatibility is maintained

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7065:
-
Labels: ready-to-commit  (was: )

> Ensure backward compatibility is maintained 
> 
>
> Key: DRILL-7065
> URL: https://issues.apache.org/jira/browse/DRILL-7065
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7066) Auto-refresh should pick up existing columns from metadata cache

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7066:
-
Labels: ready-to-commit  (was: )

> Auto-refresh should pick up existing columns from metadata cache
> 
>
> Key: DRILL-7066
> URL: https://issues.apache.org/jira/browse/DRILL-7066
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7066) Auto-refresh should pick up existing columns from metadata cache

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7066:
-
Reviewer: Aman Sinha

> Auto-refresh should pick up existing columns from metadata cache
> 
>
> Key: DRILL-7066
> URL: https://issues.apache.org/jira/browse/DRILL-7066
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
> Fix For: 1.16.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7066) Auto-refresh should pick up existing columns from metadata cache

2019-04-09 Thread Sorabh Hamirwasia (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16813656#comment-16813656
 ] 

Sorabh Hamirwasia commented on DRILL-7066:
--

Merged with DRILL-7063

> Auto-refresh should pick up existing columns from metadata cache
> 
>
> Key: DRILL-7066
> URL: https://issues.apache.org/jira/browse/DRILL-7066
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Metadata
>Reporter: Venkata Jyothsna Donapati
>Assignee: Venkata Jyothsna Donapati
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7154) TPCH query 4, 17 and 18 take longer with sf 1000 when Statistics are disabled

2019-04-08 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7154:
-
Reviewer: Boaz Ben-Zvi

> TPCH query 4, 17 and 18 take longer with sf 1000 when Statistics are disabled
> -
>
> Key: DRILL-7154
> URL: https://issues.apache.org/jira/browse/DRILL-7154
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.16.0
>Reporter: Robert Hou
>Assignee: Hanumath Rao Maduri
>Priority: Blocker
> Fix For: 1.16.0
>
> Attachments: 235a3ed4-e3d1-f3b7-39c5-fc947f56b6d5.sys.drill, 
> 235a471b-aa97-bfb5-207d-3f25b4b5fbbb.sys.drill, hashagg.nostats.data.log, 
> hashagg.nostats.foreman.log, hashagg.stats.disabled.data.log, 
> hashagg.stats.disabled.foreman.log
>
>
> Here is TPCH 04 with sf 1000:
> {noformat}
> select
>   o.o_orderpriority,
>   count(*) as order_count
> from
>   orders o
> where
>   o.o_orderdate >= date '1996-10-01'
>   and o.o_orderdate < date '1996-10-01' + interval '3' month
>   and 
>   exists (
> select
>   *
> from
>   lineitem l
> where
>   l.l_orderkey = o.o_orderkey
>   and l.l_commitdate < l.l_receiptdate
>   )
> group by
>   o.o_orderpriority
> order by
>   o.o_orderpriority;
> {noformat}
> TPCH query 4 takes 30% longer.  The plan is the same, but the Hash Agg 
> operator in the new plan takes longer.  One possible reason is that the 
> Hash Agg operator in the new plan is not using as many buckets as the old 
> plan did.  The Hash Agg operator in the new plan also uses less memory 
> than in the old plan.
> Here is the old plan:
> {noformat}
> 00-00Screen : rowType = RecordType(ANY o_orderpriority, BIGINT 
> order_count): rowcount = 375.0, cumulative cost = {1.9163601940441746E10 
> rows, 9.07316867594483E10 cpu, 2.2499969127E10 io, 3.59423968386048E12 
> network, 2.2631985057468002E10 memory}, id = 5645
> 00-01  Project(o_orderpriority=[$0], order_count=[$1]) : rowType = 
> RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 375.0, 
> cumulative cost = {1.9163226940441746E10 rows, 9.07313117594483E10 cpu, 
> 2.2499969127E10 io, 3.59423968386048E12 network, 2.2631985057468002E10 
> memory}, id = 5644
> 00-02SingleMergeExchange(sort0=[0]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9159476940441746E10 rows, 9.07238117594483E10 cpu, 2.2499969127E10 io, 
> 3.59423968386048E12 network, 2.2631985057468002E10 memory}, id = 5643
> 01-01  OrderedMuxExchange(sort0=[0]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9155726940441746E10 rows, 9.0643982838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5642
> 02-01SelectionVectorRemover : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9151976940441746E10 rows, 9.0640232838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5641
> 02-02  Sort(sort0=[$0], dir0=[ASC]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9148226940441746E10 rows, 9.0636482838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5640
> 02-03HashAgg(group=[{0}], order_count=[$SUM0($1)]) : rowType 
> = RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 375.0, 
> cumulative cost = {1.9144476940441746E10 rows, 9.030890595055101E10 cpu, 
> 2.2499969127E10 io, 3.56351968386048E12 network, 2.2571985057468002E10 
> memory}, id = 5639
> 02-04  HashToRandomExchange(dist0=[[$0]]) : rowType = 
> RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 3.75E7, 
> cumulative cost = {1.9106976940441746E10 rows, 8.955890595055101E10 cpu, 
> 2.2499969127E10 io, 3.56351968386048E12 network, 2.1911985057468002E10 
> memory}, id = 5638
> 03-01HashAgg(group=[{0}], order_count=[COUNT()]) : 
> rowType = RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 
> 3.75E7, cumulative cost = {1.9069476940441746E10 rows, 8.895890595055101E10 
> cpu, 2.2499969127E10 io, 3.25631968386048E12 network, 2.1911985057468002E10 
> memory}, id = 5637
> 03-02  Project(o_orderpriority=[$1]) : rowType = 
> RecordType(ANY o_orderpriority): rowcount = 3.75E8, cumulative cost = 
> {1.8694476940441746E10 rows, 8.145890595055101E10 cpu, 2.2499969127E10 io, 
> 3.25631968386048E12 network, 1.5311985057468002E10 memory}, id 

[jira] [Updated] (DRILL-7154) TPCH query 4, 17 and 18 take longer with sf 1000 when Statistics are disabled

2019-04-08 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7154:
-
Labels: ready-to-commit  (was: )

> TPCH query 4, 17 and 18 take longer with sf 1000 when Statistics are disabled
> -
>
> Key: DRILL-7154
> URL: https://issues.apache.org/jira/browse/DRILL-7154
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.16.0
>Reporter: Robert Hou
>Assignee: Hanumath Rao Maduri
>Priority: Blocker
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
> Attachments: 235a3ed4-e3d1-f3b7-39c5-fc947f56b6d5.sys.drill, 
> 235a471b-aa97-bfb5-207d-3f25b4b5fbbb.sys.drill, hashagg.nostats.data.log, 
> hashagg.nostats.foreman.log, hashagg.stats.disabled.data.log, 
> hashagg.stats.disabled.foreman.log
>
>
> Here is TPCH 04 with sf 1000:
> {noformat}
> select
>   o.o_orderpriority,
>   count(*) as order_count
> from
>   orders o
> where
>   o.o_orderdate >= date '1996-10-01'
>   and o.o_orderdate < date '1996-10-01' + interval '3' month
>   and 
>   exists (
> select
>   *
> from
>   lineitem l
> where
>   l.l_orderkey = o.o_orderkey
>   and l.l_commitdate < l.l_receiptdate
>   )
> group by
>   o.o_orderpriority
> order by
>   o.o_orderpriority;
> {noformat}
> TPCH query 4 takes 30% longer.  The plan is the same, but the Hash Agg 
> operator in the new plan takes longer.  One possible reason is that the 
> Hash Agg operator in the new plan is not using as many buckets as the old 
> plan did.  The Hash Agg operator in the new plan also uses less memory 
> than in the old plan.
> Here is the old plan:
> {noformat}
> 00-00Screen : rowType = RecordType(ANY o_orderpriority, BIGINT 
> order_count): rowcount = 375.0, cumulative cost = {1.9163601940441746E10 
> rows, 9.07316867594483E10 cpu, 2.2499969127E10 io, 3.59423968386048E12 
> network, 2.2631985057468002E10 memory}, id = 5645
> 00-01  Project(o_orderpriority=[$0], order_count=[$1]) : rowType = 
> RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 375.0, 
> cumulative cost = {1.9163226940441746E10 rows, 9.07313117594483E10 cpu, 
> 2.2499969127E10 io, 3.59423968386048E12 network, 2.2631985057468002E10 
> memory}, id = 5644
> 00-02SingleMergeExchange(sort0=[0]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9159476940441746E10 rows, 9.07238117594483E10 cpu, 2.2499969127E10 io, 
> 3.59423968386048E12 network, 2.2631985057468002E10 memory}, id = 5643
> 01-01  OrderedMuxExchange(sort0=[0]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9155726940441746E10 rows, 9.0643982838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5642
> 02-01SelectionVectorRemover : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9151976940441746E10 rows, 9.0640232838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5641
> 02-02  Sort(sort0=[$0], dir0=[ASC]) : rowType = RecordType(ANY 
> o_orderpriority, BIGINT order_count): rowcount = 375.0, cumulative cost = 
> {1.9148226940441746E10 rows, 9.0636482838025E10 cpu, 2.2499969127E10 io, 
> 3.56351968386048E12 network, 2.2631985057468002E10 memory}, id = 5640
> 02-03HashAgg(group=[{0}], order_count=[$SUM0($1)]) : rowType 
> = RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 375.0, 
> cumulative cost = {1.9144476940441746E10 rows, 9.030890595055101E10 cpu, 
> 2.2499969127E10 io, 3.56351968386048E12 network, 2.2571985057468002E10 
> memory}, id = 5639
> 02-04  HashToRandomExchange(dist0=[[$0]]) : rowType = 
> RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 3.75E7, 
> cumulative cost = {1.9106976940441746E10 rows, 8.955890595055101E10 cpu, 
> 2.2499969127E10 io, 3.56351968386048E12 network, 2.1911985057468002E10 
> memory}, id = 5638
> 03-01HashAgg(group=[{0}], order_count=[COUNT()]) : 
> rowType = RecordType(ANY o_orderpriority, BIGINT order_count): rowcount = 
> 3.75E7, cumulative cost = {1.9069476940441746E10 rows, 8.895890595055101E10 
> cpu, 2.2499969127E10 io, 3.25631968386048E12 network, 2.1911985057468002E10 
> memory}, id = 5637
> 03-02  Project(o_orderpriority=[$1]) : rowType = 
> RecordType(ANY o_orderpriority): rowcount = 3.75E8, cumulative cost = 
> {1.8694476940441746E10 rows, 8.145890595055101E10 cpu, 2.2499969127E10 io, 
> 

[jira] [Updated] (DRILL-7045) UDF string_binary java.lang.IndexOutOfBoundsException:

2019-04-07 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7045:
-
Labels: ready-to-commit  (was: )

> UDF string_binary java.lang.IndexOutOfBoundsException:
> --
>
> Key: DRILL-7045
> URL: https://issues.apache.org/jira/browse/DRILL-7045
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.15.0
>Reporter: jean-claude
>Assignee: jean-claude
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Given a large field like
>  
> cat input.json
> { "col0": 
> 

[jira] [Updated] (DRILL-7145) Exceptions happened during retrieving values from ValueVector are not being displayed at the Drill Web UI

2019-04-05 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7145:
-
Component/s: Web Server

> Exceptions happened during retrieving values from ValueVector are not being 
> displayed at the Drill Web UI
> -
>
> Key: DRILL-7145
> URL: https://issues.apache.org/jira/browse/DRILL-7145
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Assignee: Anton Gozhiy
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Data:*
> A text file with the following content:
> {noformat}
> Id,col1,col2
> 1,aaa,bbb
> 2,ccc,ddd
> 3,eee
> 4,fff,ggg
> {noformat}
> Note that the record with id 3 has no value for the third column.
> exec.storage.enable_v3_text_reader should be false.
> *Submit the query from the Web UI:*
> {code:sql}
> select * from 
> table(dfs.tmp.`/drill/text/test`(type=>'text',lineDelimiter=>'\n',fieldDelimiter=>',',extractHeader=>true))
> {code}
> *Expected result:*
> An exception should occur due to DRILL-4814, and it should be properly displayed.
> *Actual result:*
> Incorrect data is returned but without error. Query status: success.
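The failure mode described above can be reproduced in miniature. The snippet below is only an analogy (it is not Drill's text reader code): splitting the short record `3,eee` yields two fields, so asking for the third column's index throws an `ArrayIndexOutOfBoundsException`, which is the kind of error the Web UI should surface instead of reporting success.

```java
public class ShortRowDemo {
    // Return the idx-th comma-separated field of a line; a row with too few
    // fields throws ArrayIndexOutOfBoundsException, analogous to the reader
    // failure on the record with id 3.
    static String field(String line, int idx) {
        return line.split(",")[idx];
    }

    public static void main(String[] args) {
        System.out.println(field("1,aaa,bbb", 2)); // third column present
        try {
            field("3,eee", 2);                     // row is missing col2
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("short row: " + e.getMessage());
        }
    }
}
```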



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7146) Query failing with NPE when ZK queue is enabled

2019-04-02 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7146:
-
Labels: ready-to-commit  (was: )

> Query failing with NPE when ZK queue is enabled
> ---
>
> Key: DRILL-7146
> URL: https://issues.apache.org/jira/browse/DRILL-7146
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.16.0
>Reporter: Sorabh Hamirwasia
>Assignee: Hanumath Rao Maduri
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
>  
> {code:java}
> >> Query: alter system reset all;
>  SYSTEM ERROR: NullPointerException
> Please, refer to logs for more information.
> [Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
>  java.sql.SQLException: SYSTEM ERROR: NullPointerException
> Please, refer to logs for more information.
> [Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
>  at 
> org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:535)
>  at 
> org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:607)
>  at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1278)
>  at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:58)
>  at 
> oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:667)
>  at 
> org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1107)
>  at 
> org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1118)
>  at 
> oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
>  at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:200)
>  at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>  at 
> oadd.org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
>  at org.apache.drill.test.framework.Utils.execSQL(Utils.java:917)
>  at org.apache.drill.test.framework.TestDriver.setup(TestDriver.java:632)
>  at org.apache.drill.test.framework.TestDriver.runTests(TestDriver.java:152)
>  at org.apache.drill.test.framework.TestDriver.main(TestDriver.java:94)
>  Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: 
> SYSTEM ERROR: NullPointerException
> Please, refer to logs for more information.
> [Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
>  at 
> oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
>  at oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
>  at oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
>  at 
> oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
>  at 
> oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
>  at 
> oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>  at 
> oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>  at 
> oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>  at 
> oadd.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
>  at 
> oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
>  at 
> oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>  at 
> 

[jira] [Updated] (DRILL-7146) Query failing with NPE when ZK queue is enabled

2019-04-01 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7146:
-
Description: 
 
{code:java}
>> Query: alter system reset all;
 SYSTEM ERROR: NullPointerException
Please, refer to logs for more information.
[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
 java.sql.SQLException: SYSTEM ERROR: NullPointerException
Please, refer to logs for more information.
[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
 at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:535)
 at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:607)
 at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1278)
 at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:58)
 at 
oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:667)
 at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1107)
 at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1118)
 at 
oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
 at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:200)
 at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
 at 
oadd.org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
 at org.apache.drill.test.framework.Utils.execSQL(Utils.java:917)
 at org.apache.drill.test.framework.TestDriver.setup(TestDriver.java:632)
 at org.apache.drill.test.framework.TestDriver.runTests(TestDriver.java:152)
 at org.apache.drill.test.framework.TestDriver.main(TestDriver.java:94)
 Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
ERROR: NullPointerException
Please, refer to logs for more information.
[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
 at 
oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
 at oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
 at oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
 at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
 at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
 at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
 at 
oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
 at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
 at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
 at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
 at 
oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
 at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
 at 
oadd.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
 at 

[jira] [Created] (DRILL-7146) Query failing with NPE when ZK queue is enabled

2019-04-01 Thread Sorabh Hamirwasia (JIRA)
Sorabh Hamirwasia created DRILL-7146:


 Summary: Query failing with NPE when ZK queue is enabled
 Key: DRILL-7146
 URL: https://issues.apache.org/jira/browse/DRILL-7146
 Project: Apache Drill
  Issue Type: Bug
  Components: Query Planning & Optimization
Affects Versions: 1.16.0
Reporter: Sorabh Hamirwasia
Assignee: Hanumath Rao Maduri
 Fix For: 1.16.0


>> Query: alter system reset all;
SYSTEM ERROR: NullPointerException


Please, refer to logs for more information.

[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
java.sql.SQLException: SYSTEM ERROR: NullPointerException


Please, refer to logs for more information.

[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:535)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:607)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1278)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:58)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:667)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1107)
at 
org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1118)
at 
oadd.org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:200)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at 
oadd.org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
at org.apache.drill.test.framework.Utils.execSQL(Utils.java:917)
at org.apache.drill.test.framework.TestDriver.setup(TestDriver.java:632)
at 
org.apache.drill.test.framework.TestDriver.runTests(TestDriver.java:152)
at org.apache.drill.test.framework.TestDriver.main(TestDriver.java:94)
Caused by: oadd.org.apache.drill.common.exceptions.UserRemoteException: SYSTEM 
ERROR: NullPointerException


Please, refer to logs for more information.

[Error Id: ec4b9c66-9f5c-4736-acf3-605f84ea0226 on drill80:31010]
at 
oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
at 
oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
at 
oadd.org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
at 
oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
at 
oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 

[jira] [Updated] (DRILL-7051) Upgrade to Jetty 9.3

2019-03-28 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7051:
-
Labels: ready-to-commit  (was: )

> Upgrade to Jetty 9.3 
> -
>
> Key: DRILL-7051
> URL: https://issues.apache.org/jira/browse/DRILL-7051
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Web Server
>Affects Versions: 1.15.0
>Reporter: Veera Naranammalpuram
>Assignee: Vitalii Diravka
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> Is Drill using a really old version of the Jetty web server? The jars 
> suggest it is using Jetty 9.1, built sometime in 2014: 
> {noformat}
> -rw-r--r-- 1 veeranaranammalpuram staff 15988 Nov 20 2017 
> jetty-continuation-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 103288 Nov 20 2017 
> jetty-http-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 101519 Nov 20 2017 
> jetty-io-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 95906 Nov 20 2017 
> jetty-security-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 401593 Nov 20 2017 
> jetty-server-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 110992 Nov 20 2017 
> jetty-servlet-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 119215 Nov 20 2017 
> jetty-servlets-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 341683 Nov 20 2017 
> jetty-util-9.1.5.v20140505.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 38707 Dec 21 15:42 
> jetty-util-ajax-9.3.19.v20170502.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 111466 Nov 20 2017 
> jetty-webapp-9.1.1.v20140108.jar
> -rw-r--r-- 1 veeranaranammalpuram staff 41763 Nov 20 2017 
> jetty-xml-9.1.1.v20140108.jar {noformat}
> This version is shown as deprecated: 
> [https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Opening this to upgrade jetty to the latest stable supported version. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7107) Unable to connect to Drill 1.15 through ZK

2019-03-26 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-7107:
-
Component/s: Client - JDBC

> Unable to connect to Drill 1.15 through ZK
> --
>
> Key: DRILL-7107
> URL: https://issues.apache.org/jira/browse/DRILL-7107
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - JDBC
>Reporter: Karthikeyan Manivannan
>Assignee: Karthikeyan Manivannan
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> After upgrading to Drill 1.15, users find they are no longer able to 
> connect to Drill using a ZK quorum. They get the following "Unable to 
> setup ZK for client" error:
> [~]$ sqlline -u "jdbc:drill:zk=172.16.2.165:5181;auth=maprsasl"
> Error: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. 
> (state=,code=0)
> java.sql.SQLNonTransientConnectionException: Failure in connecting to Drill: 
> org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
>  at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:174)
>  at 
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
>  at 
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
>  at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>  at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
>  at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
>  at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
>  at sqlline.Commands.connect(Commands.java:1247)
>  at sqlline.Commands.connect(Commands.java:1139)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>  at sqlline.SqlLine.dispatch(SqlLine.java:722)
>  at sqlline.SqlLine.initArgs(SqlLine.java:416)
>  at sqlline.SqlLine.begin(SqlLine.java:514)
>  at sqlline.SqlLine.start(SqlLine.java:264)
>  at sqlline.SqlLine.main(SqlLine.java:195)
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for 
> client.
>  at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:340)
>  at 
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:165)
>  ... 18 more
> Caused by: java.lang.NullPointerException
>  at 
> org.apache.drill.exec.coord.zk.ZKACLProviderFactory.findACLProvider(ZKACLProviderFactory.java:68)
>  at 
> org.apache.drill.exec.coord.zk.ZKACLProviderFactory.getACLProvider(ZKACLProviderFactory.java:47)
>  at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.(ZKClusterCoordinator.java:114)
>  at 
> org.apache.drill.exec.coord.zk.ZKClusterCoordinator.(ZKClusterCoordinator.java:86)
>  at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:337)
>  ... 19 more
> Apache Drill 1.15.0.0
> "This isn't your grandfather's SQL."
> sqlline>
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

