[jira] [Created] (DRILL-8143) Error querying json with $date field

2022-02-18 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-8143:
---

 Summary: Error querying json with $date field
 Key: DRILL-8143
 URL: https://issues.apache.org/jira/browse/DRILL-8143
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Anton Gozhiy
 Attachments: extended.json

*Test Data:*
extended.json attached.

*Query:*
 # select * from dfs.drillTestDir.`complex/drill-2879/extended.json` where name 
= 'd'

*Expected Results:*
Query successful, no exception should be thrown.

*Actual Result:*
Exception happened:
{noformat}
UserRemoteException :   INTERNAL_ERROR ERROR: Text 
'2015-03-12T21:54:31.809+0530' could not be parsed at index 23

org.apache.drill.common.exceptions.UserRemoteException: INTERNAL_ERROR ERROR: 
Text '2015-03-12T21:54:31.809+0530' could not be parsed at index 23

Fragment: 0:0

Please, refer to logs for more information.

[Error Id: c984adbf-a455-4e0e-b3cd-b5aa7d83a765 on userf87d-pc:31010]

  (java.time.format.DateTimeParseException) Text '2015-03-12T21:54:31.809+0530' 
could not be parsed at index 23
java.time.format.DateTimeFormatter.parseResolved0():2046
java.time.format.DateTimeFormatter.parse():1948
java.time.Instant.parse():395

org.apache.drill.exec.vector.complex.fn.VectorOutput$MapVectorOutput.writeTimestamp():364
org.apache.drill.exec.vector.complex.fn.VectorOutput.innerRun():115

org.apache.drill.exec.vector.complex.fn.VectorOutput$MapVectorOutput.run():308
org.apache.drill.exec.vector.complex.fn.JsonReader.writeMapDataIfTyped():386
org.apache.drill.exec.vector.complex.fn.JsonReader.writeData():262
org.apache.drill.exec.vector.complex.fn.JsonReader.writeDataSwitch():192
org.apache.drill.exec.vector.complex.fn.JsonReader.writeDocument():178

org.apache.drill.exec.store.easy.json.reader.BaseJsonReader.writeToVector():99
org.apache.drill.exec.store.easy.json.reader.BaseJsonReader.write():70
org.apache.drill.exec.store.easy.json.JSONRecordReader.next():234
org.apache.drill.exec.physical.impl.ScanBatch.internalNext():234
org.apache.drill.exec.physical.impl.ScanBatch.next():298
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():111
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():85
org.apache.drill.exec.record.AbstractRecordBatch.next():170
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():111
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59
org.apache.drill.exec.record.AbstractRecordBatch.next():170
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():111
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59
org.apache.drill.exec.record.AbstractRecordBatch.next():170
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():111
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():85
org.apache.drill.exec.record.AbstractRecordBatch.next():170
org.apache.drill.exec.record.AbstractRecordBatch.next():119
org.apache.drill.exec.record.AbstractRecordBatch.next():111
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():59

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():85
org.apache.drill.exec.record.AbstractRecordBatch.next():170
org.apache.drill.exec.physical.impl.BaseRootExec.next():103
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
org.apache.drill.exec.physical.impl.BaseRootExec.next():93
org.apache.drill.exec.work.fragment.FragmentExecutor.lambda$run$0():321
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():423
org.apache.hadoop.security.UserGroupInformation.doAs():1762
org.apache.drill.exec.work.fragment.FragmentExecutor.run():310
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1128
java.util.concurrent.ThreadPoolExecutor$Worker.run():628
java.lang.Thread.run():834
{noformat}
*Note:* It is not reproducible in Drill 1.19.0
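The stack trace bottoms out in java.time.Instant.parse(). As a standalone illustration (not Drill's code, and not the actual fix), the sketch below reproduces the failure and shows that the same text parses once the offset is matched by an explicit pattern; the pattern used here is just one possible way to handle a colon-less "+0530" offset:

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class DateParseDemo {
    public static void main(String[] args) {
        String text = "2015-03-12T21:54:31.809+0530";

        // Instant.parse() uses DateTimeFormatter.ISO_INSTANT, which accepts
        // only a literal 'Z' zone designator, so the '+' at index 23 fails.
        try {
            Instant.parse(text);
        } catch (DateTimeParseException e) {
            System.out.println(e.getMessage());
        }

        // An explicit pattern whose 'Z' letter matches numeric offsets such
        // as "+0530" parses the same text successfully.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        Instant instant = OffsetDateTime.parse(text, fmt).toInstant();
        System.out.println(instant); // 2015-03-12T16:24:31.809Z
    }
}
```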



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (DRILL-8120) Make Drill functional tests viable again.

2022-01-31 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-8120:
---

 Summary: Make Drill functional tests viable again.
 Key: DRILL-8120
 URL: https://issues.apache.org/jira/browse/DRILL-8120
 Project: Apache Drill
  Issue Type: Task
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy


There is an external test framework that was used for Drill regression testing 
before:
[https://github.com/mapr/drill-test-framework]
Although it is under the mapr domain, it is public and licensed under the Apache License 2.0, so it can be used again.

*Problems that need to be solved to make it work:*
 # Environment. It used to run on a quite powerful physical cluster with HDFS and Drill, with static configuration and reusable test data. This makes the framework inflexible, and even if you have a suitable environment it may still require some amount of manual tuning. Possible solution: wrap it up in a Docker container to make it platform-independent and minimize the effort to set it up.
 # Tests have not been updated for two years, so they need to be brought up to date. This can be done step by step, fixing some test suites and removing or disabling those that are no longer needed.
 # Test pipeline. Once the first two items are resolved, a CI tool can be used to run tests regularly.





[jira] [Created] (DRILL-7749) Drill-on-Yarn Application Master UI is broken due to bootstrap update

2020-06-17 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7749:
---

 Summary: Drill-on-Yarn Application Master UI is broken due to 
bootstrap update
 Key: DRILL-7749
 URL: https://issues.apache.org/jira/browse/DRILL-7749
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy








[jira] [Created] (DRILL-7705) Update jQuery and Bootstrap libraries

2020-04-17 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7705:
---

 Summary: Update jQuery and Bootstrap libraries
 Key: DRILL-7705
 URL: https://issues.apache.org/jira/browse/DRILL-7705
 Project: Apache Drill
  Issue Type: Improvement
Affects Versions: 1.17.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.18.0


There are some vulnerabilities present in jQuery and Bootstrap libraries used 
in Drill:
* jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, 
mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. 
If an unsanitized source object contained an enumerable __proto__ property, it 
could extend the native Object.prototype.
* In Bootstrap before 4.1.2, XSS is possible in the collapse data-parent 
attribute.
* In Bootstrap before 4.1.2, XSS is possible in the data-container property of 
tooltip.
* In Bootstrap before 3.4.0, XSS is possible in the affix configuration target 
property.
* In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the 
tooltip or popover data-template attribute.

The following updates are suggested to fix them:
* jQuery: 3.2.1 -> 3.5.0
* Bootstrap: 3.1.1 -> 4.4.1





[jira] [Created] (DRILL-7700) Queries to sys schema hang if "use dfs;" was executed before

2020-04-14 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7700:
---

 Summary: Queries to sys schema hang if "use dfs;" was executed 
before
 Key: DRILL-7700
 URL: https://issues.apache.org/jira/browse/DRILL-7700
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy


*Steps:*
# Connect to Drill by sqlline
# Run query "use dfs;"
# Run query "select * from sys.drillbits;"

*Expected result:* The query should be executed successfully.

*Actual result:* The query hangs at the planning stage.

*Note:* The issue is reproduced with Drill built with "mapr" profile.





[jira] [Created] (DRILL-7693) Upgrade Protobuf to 3.11.1

2020-04-09 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7693:
---

 Summary: Upgrade Protobuf to 3.11.1
 Key: DRILL-7693
 URL: https://issues.apache.org/jira/browse/DRILL-7693
 Project: Apache Drill
  Issue Type: Improvement
Affects Versions: 1.17.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.18.0








[jira] [Created] (DRILL-7647) Drill Web server doesn't work with TLS protocol version 1.1

2020-03-17 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7647:
---

 Summary: Drill Web server doesn't work with TLS protocol version 
1.1
 Key: DRILL-7647
 URL: https://issues.apache.org/jira/browse/DRILL-7647
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Anton Gozhiy


*Prerequisites:*
# Set the following config options:
* drill.exec.http.ssl_enabled: true
* drill.exec.ssl.protocol: "*TLSv1.1*"
* Also:
** drill.exec.ssl.trustStorePath
** drill.exec.ssl.trustStorePassword
** drill.exec.ssl.keyStorePath
** keyStorePassword
* Or, if on MapR platform: 
** drill.exec.ssl.useMapRSSLConfig: true

*Steps:*
# Start Drill
# Try to open the Web UI
# Try to connect by an ssl client:
{noformat}
openssl s_client -connect node1.cluster.com:8047 -tls1_1
{noformat}

*Expected result:*
The server should accept TLS protocol version 1.1.

*Actual results:*
* Cannot open the Web UI:
{noformat}
This site can't provide a secure connection
node1.cluster.com uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
{noformat}
* Openssl client fails to connect using either v1.1 or v1.2 protocols.
{noformat}
$ openssl s_client -connect node1.cluster.com:8047 -tls1_1
CONNECTED(0003)
140310139057816:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert 
handshake failure:s3_pkt.c:1487:SSL alert number 40
140310139057816:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake 
failure:s3_pkt.c:656:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol  : TLSv1.1
Cipher: 
Session-ID: 
Session-ID-ctx: 
Master-Key: 
Key-Arg   : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1584457371
Timeout   : 7200 (sec)
Verify return code: 0 (ok)
---
{noformat}
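One way to diagnose this from the server side (a standalone JDK sketch, not Drill code) is to compare which protocol versions the JVM's default SSLContext supports with which it enables by default; on newer JDKs TLSv1.1 is typically removed from the enabled set by the jdk.tls.disabledAlgorithms security property, which could produce a handshake failure like the one above:

```java
import javax.net.ssl.SSLContext;
import java.util.Arrays;

public class TlsProtocolCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // Everything the JSSE provider can speak at all:
        System.out.println("Supported: "
                + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        // What is enabled out of the box; TLSv1.1 may be missing here when
        // the jdk.tls.disabledAlgorithms security property disables it.
        System.out.println("Default:   "
                + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
    }
}
```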






[jira] [Created] (DRILL-7637) Add an option to retrieve MapR SSL truststore/keystore credentials using MapR Web Security Manager

2020-03-11 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7637:
---

 Summary: Add an option to retrieve MapR SSL truststore/keystore 
credentials using MapR Web Security Manager
 Key: DRILL-7637
 URL: https://issues.apache.org/jira/browse/DRILL-7637
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.18.0


If Drill is built with the mapr profile and the "useMapRSSLConfig" option is set to true, it will use the MapR Web Security Manager to retrieve SSL credentials.
 Example usage:
 - Add an option to the Drill config:
{noformat}
drill.exec.ssl.useMapRSSLConfig: true
{noformat}

 - Connect by sqlline:
{noformat}
./bin/sqlline -u 
"jdbc:drill:drillbit=node1.cluster.com:31010;enableTLS=true;useMapRSSLConf=true"
{noformat}





[jira] [Created] (DRILL-7623) Link error is displayed at the log content page on Web UI

2020-03-04 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7623:
---

 Summary: Link error is displayed at the log content page on Web UI
 Key: DRILL-7623
 URL: https://issues.apache.org/jira/browse/DRILL-7623
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy


*Steps:*
# Open a log file from the Web UI:
{noformat}
/log/sqlline.log/content
{noformat}

*Expected result:*
There should be no errors.

*Actual result:*
{noformat}
GET http://localhost:8047/log/static/js/jquery-3.2.1.min.js net::ERR_ABORTED 
500 (Internal Server Error)
bootstrap.min.js:6 Uncaught Error: Bootstrap's JavaScript requires jQuery
at bootstrap.min.js:6
(anonymous) @   bootstrap.min.js:6
{noformat}






[jira] [Created] (DRILL-7620) Storage plugin update page shows that a plugin is disabled though it is actually enabled.

2020-03-03 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7620:
---

 Summary: Storage plugin update page shows that a plugin is 
disabled though it is actually enabled.
 Key: DRILL-7620
 URL: https://issues.apache.org/jira/browse/DRILL-7620
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Anton Gozhiy
Assignee: Paul Rogers


*Steps to reproduce:*
# On Web UI, open storage page
# Disable some plugin (e.g. "cp")
# Enable this plugin (It is displayed in "enabled" section now)
# Update the plugin, look at the "enabled" property

*Expected result:*
"enabled": true

*Actual result:*
"enabled": false

*Note:* Though the plugin is displayed as disabled in the config, queries against it still work.

*Workaround:* Enable it again.





[jira] [Created] (DRILL-7619) Metrics is not displayed due to incorrect endpoint link on the Drill index page

2020-03-03 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7619:
---

 Summary: Metrics is not displayed due to incorrect endpoint link 
on the Drill index page
 Key: DRILL-7619
 URL: https://issues.apache.org/jira/browse/DRILL-7619
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.18.0


Should be /status/metrics/\{hostname} instead of /status/\{hostname}/metrics





[jira] [Created] (DRILL-7582) Drill docker Web UI doesn't show resources usage information if map the container to a non-default port

2020-02-13 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7582:
---

 Summary: Drill docker Web UI doesn't show resources usage 
information if map the container to a non-default port
 Key: DRILL-7582
 URL: https://issues.apache.org/jira/browse/DRILL-7582
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy


*Steps:*
# Run Drill docker container with non-default port published:
{noformat}
$ docker container run -it --rm -p 9047:8047 apache/drill
{noformat}
# Open Drill Web UI (localhost:9047)

*Expected result:*
The following fields should contain relevant information:
* Heap Memory Usage
* Direct Memory Usage
* CPU Usage
* Avg Sys Load
* Uptime

*Actual result:*
"Not Available" is displayed.

*Note:* If the default port is published (-p 8047:8047), everything is shown correctly.





[jira] [Created] (DRILL-7520) Cannot connect to Drill with PLAIN authentication enabled using JDBC client

2020-01-08 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7520:
---

 Summary: Cannot connect to Drill with PLAIN authentication enabled 
using JDBC client
 Key: DRILL-7520
 URL: https://issues.apache.org/jira/browse/DRILL-7520
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy


*Prerequisites:*
# Drill with the JDBC driver is built with "mapr" profile
# Security is enabled and PLAIN authentication is configured

*Steps:*
# Use some external JDBC client to connect (e.g. DBeaver)
# Connection string: "jdbc:drill:drillbit=node1:31010"
# Set appropriate user/password
# Test Connection

*Expected result:*
Connection successful.

*Actual result:*
Exception happens:
{noformat}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
Exception in thread "main" java.sql.SQLNonTransientConnectionException: Failure 
in connecting to Drill: oadd.org.apache.drill.exec.rpc.RpcException: 
HANDSHAKE_VALIDATION : org/apache/hadoop/conf/Configuration
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:178)
at 
org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
at 
org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
at 
oadd.org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
at org.apache.drill.jdbc.Driver.connect(Driver.java:75)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at TheBestClientEver.main(TheBestClientEver.java:28)
Caused by: oadd.org.apache.drill.exec.rpc.RpcException: HANDSHAKE_VALIDATION : 
org/apache/hadoop/conf/Configuration
at 
oadd.org.apache.drill.exec.rpc.user.UserClient$2.connectionFailed(UserClient.java:315)
at 
oadd.org.apache.drill.exec.rpc.user.QueryResultHandler$ChannelClosedHandler.connectionFailed(QueryResultHandler.java:396)
at 
oadd.org.apache.drill.exec.rpc.ConnectionMultiListener$HandshakeSendHandler.success(ConnectionMultiListener.java:170)
at 
oadd.org.apache.drill.exec.rpc.ConnectionMultiListener$HandshakeSendHandler.success(ConnectionMultiListener.java:143)
at 
oadd.org.apache.drill.exec.rpc.RequestIdMap$RpcListener.set(RequestIdMap.java:134)
at 
oadd.org.apache.drill.exec.rpc.BasicClient$ClientHandshakeHandler.consumeHandshake(BasicClient.java:318)
at 
oadd.org.apache.drill.exec.rpc.AbstractHandshakeHandler.decode(AbstractHandshakeHandler.java:57)
at 
oadd.org.apache.drill.exec.rpc.AbstractHandshakeHandler.decode(AbstractHandshakeHandler.java:29)
at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at 
oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 

[jira] [Created] (DRILL-7465) Revise the approach of handling dependencies with incompatible versions

2019-12-04 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7465:
---

 Summary: Revise the approach of handling dependencies with 
incompatible versions
 Key: DRILL-7465
 URL: https://issues.apache.org/jira/browse/DRILL-7465
 Project: Apache Drill
  Issue Type: Task
Reporter: Anton Gozhiy


In continuation of the conversation started [here|https://github.com/apache/drill/pull/1910].
Transitive dependencies with different versions are a common problem. The first and obvious solution would be to add them to Dependency Management in a pom.xml, and if backward compatibility is preserved this will work. But often there are changes in the API, especially when major versions differ.

Current approaches used in Drill to handle this situation:
 * Using the Maven Shade plugin
 ** Pros:
 *** Solves the problem, as libraries use their target dependency version.
 ** Cons:
 *** Requires a lot of code changes and some tricky work bringing all component libraries together and relocating them.
 *** Will probably increase the jar size.
 * Patching conflicting classes with Javassist (Guava and Protobuf)
 ** Pros:
 *** Easier to implement than shading.
 *** Only one dependency version is used.
 ** Cons:
 *** This is dark magic.
 *** It is hard to find all the places that need a patch. This may cause some ugly exceptions when you least expect them.
 *** Often needs rework after a library version upgrade.
 *** For this to work, the patching must happen before anything else. This can theoretically cause race conditions.
 *** Extending the previous point, patching needs to be done before all tests, so they all have to inherit from one test class that performs the patching. This is obviously overhead.

The idea of this task is to stop using patching altogether, due to its cons, and switch either to shading or to some other approach if one exists.
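As a concrete illustration of the shading approach, a relocation stanza along these lines moves a conflicting library under a private package so the rest of the classpath can keep its own version. This is a sketch only; the shaded package name follows the convention used by Drill's shaded Guava module but is not copied verbatim from any pom:

```xml
<!-- Sketch: relocate Guava under a Drill-private package so other
     classpath entries can depend on a different Guava version. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.drill.shaded.guava.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The cost noted above is real: all code must then import the relocated package name, which is why adopting shading touches many files.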





[jira] [Created] (DRILL-7440) Failure during loading of RepeatedCount functions

2019-11-06 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7440:
---

 Summary: Failure during loading of RepeatedCount functions
 Key: DRILL-7440
 URL: https://issues.apache.org/jira/browse/DRILL-7440
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy


*Steps:*
# Start Drillbit
# Look at the drillbit.log

*Expected result:* No exceptions should be present.

*Actual result:*
Null Pointer Exceptions occur:
{noformat}
2019-11-06 03:06:40,401 [main] WARN  o.a.d.exec.expr.fn.FunctionConverter - 
Failure loading function class 
org.apache.drill.exec.expr.fn.impl.RepeatedCountFunctions$RepeatedCountRepeatedDict,
 field input. Message: Failure while trying to access the ValueHolder's TYPE 
static variable.  All ValueHolders must contain a static TYPE variable that 
defines their MajorType.
java.lang.NullPointerException: null
at 
sun.reflect.UnsafeFieldAccessorImpl.ensureObj(UnsafeFieldAccessorImpl.java:57) 
~[na:1.8.0_171]
at 
sun.reflect.UnsafeObjectFieldAccessorImpl.get(UnsafeObjectFieldAccessorImpl.java:36)
 ~[na:1.8.0_171]
at java.lang.reflect.Field.get(Field.java:393) ~[na:1.8.0_171]
at 
org.apache.drill.exec.expr.fn.FunctionConverter.getStaticFieldValue(FunctionConverter.java:220)
 ~[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.FunctionConverter.getHolder(FunctionConverter.java:136)
 ~[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.registry.LocalFunctionRegistry.validate(LocalFunctionRegistry.java:130)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.registry.LocalFunctionRegistry.(LocalFunctionRegistry.java:88)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.FunctionImplementationRegistry.(FunctionImplementationRegistry.java:113)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.server.DrillbitContext.(DrillbitContext.java:118) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.work.WorkManager.start(WorkManager.java:116) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:222) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:581) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:551) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:547) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
2019-11-06 03:06:40,402 [main] WARN  o.a.d.e.e.f.r.LocalFunctionRegistry - 
Unable to initialize function for class 
org.apache.drill.exec.expr.fn.impl.RepeatedCountFunctions$RepeatedCountRepeatedDict
2019-11-06 03:06:40,487 [main] WARN  o.a.d.exec.expr.fn.FunctionConverter - 
Failure loading function class 
org.apache.drill.exec.expr.fn.impl.gaggr.CountFunctions$RepeatedDictCountFunction,
 field in. Message: Failure while trying to access the ValueHolder's TYPE 
static variable.  All ValueHolders must contain a static TYPE variable that 
defines their MajorType.
java.lang.NullPointerException: null
at 
sun.reflect.UnsafeFieldAccessorImpl.ensureObj(UnsafeFieldAccessorImpl.java:57) 
~[na:1.8.0_171]
at 
sun.reflect.UnsafeObjectFieldAccessorImpl.get(UnsafeObjectFieldAccessorImpl.java:36)
 ~[na:1.8.0_171]
at java.lang.reflect.Field.get(Field.java:393) ~[na:1.8.0_171]
at 
org.apache.drill.exec.expr.fn.FunctionConverter.getStaticFieldValue(FunctionConverter.java:220)
 ~[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.FunctionConverter.getHolder(FunctionConverter.java:136)
 ~[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.registry.LocalFunctionRegistry.validate(LocalFunctionRegistry.java:130)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.registry.LocalFunctionRegistry.(LocalFunctionRegistry.java:88)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.expr.fn.FunctionImplementationRegistry.(FunctionImplementationRegistry.java:113)
 [drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 
org.apache.drill.exec.server.DrillbitContext.(DrillbitContext.java:118) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.work.WorkManager.start(WorkManager.java:116) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:222) 
[drill-java-exec-1.17.0-SNAPSHOT.jar:1.17.0-SNAPSHOT]
at 

[jira] [Created] (DRILL-7429) Wrong column order when selecting complex data using Hive storage plugin.

2019-10-30 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7429:
---

 Summary: Wrong column order when selecting complex data using Hive 
storage plugin.
 Key: DRILL-7429
 URL: https://issues.apache.org/jira/browse/DRILL-7429
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy
 Attachments: customer_complex.zip

*Data:*
customer_complex.zip attached

*Query:*
{code:sql}
select t3.a, t3.b from (select t2.a, t2.a.o_lineitems[1].l_part.p_name b from 
(select t1.c_orders[0] a from hive.customer_complex t1) t2) t3 limit 1
{code}

*Expected result:*
Column order: a, b

*Actual result:*
Column order: b, a

*Physical plan:*
{noformat}
00-00Screen
00-01  Project(a=[ROW($0, $1, $2, $3, $4, $5, $6, $7)], b=[$8])
00-02Project(a=[ITEM($0, 0).o_orderstatus], a1=[ITEM($0, 
0).o_totalprice], a2=[ITEM($0, 0).o_orderdate], a3=[ITEM($0, 
0).o_orderpriority], a4=[ITEM($0, 0).o_clerk], a5=[ITEM($0, 0).o_shippriority], 
a6=[ITEM($0, 0).o_comment], a7=[ITEM($0, 0).o_lineitems], 
b=[ITEM(ITEM(ITEM(ITEM($0, 0).o_lineitems, 1), 'l_part'), 'p_name')])
00-03  Project(c_orders=[$0])
00-04SelectionVectorRemover
00-05  Limit(fetch=[10])
00-06Scan(table=[[hive, customer_complex]], 
groupscan=[HiveDrillNativeParquetScan [entries=[ReadEntryWithPath 
[path=/drill/customer_complex/00_0]], numFiles=1, numRowGroups=1, 
columns=[`c_orders`[0].`o_orderstatus`, `c_orders`[0].`o_totalprice`, 
`c_orders`[0].`o_orderdate`, `c_orders`[0].`o_orderpriority`, 
`c_orders`[0].`o_clerk`, `c_orders`[0].`o_shippriority`, 
`c_orders`[0].`o_comment`, `c_orders`[0].`o_lineitems`, 
`c_orders`[0].`o_lineitems`[1].`l_part`.`p_name`]]])
{noformat}

*Note:* Reproduced with both Hive and Native readers. Non-reproducible with 
Parquet reader.





[jira] [Created] (DRILL-7381) Query to a map field returns nulls with hive native reader

2019-09-19 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7381:
---

 Summary: Query to a map field returns nulls with hive native reader
 Key: DRILL-7381
 URL: https://issues.apache.org/jira/browse/DRILL-7381
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy
 Attachments: customer_complex.zip

*Query:*
{code:sql}
select t.c_nation.n_region.r_name from hive.customer_complex t limit 5
{code}

*Expected results:*
{noformat}
AFRICA
MIDDLE EAST
AMERICA
MIDDLE EAST
AMERICA
{noformat}

*Actual results:*
{noformat}
null
null
null
null
null
{noformat}

*Workaround:*

{code:sql}
set store.hive.optimize_scan_with_native_readers = false;
{code}





[jira] [Created] (DRILL-7380) Query of a field inside of an array of structs returns null

2019-09-19 Thread Anton Gozhiy (Jira)
Anton Gozhiy created DRILL-7380:
---

 Summary: Query of a field inside of an array of structs returns 
null
 Key: DRILL-7380
 URL: https://issues.apache.org/jira/browse/DRILL-7380
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: Anton Gozhiy
 Attachments: customer_complex.zip

*Query:*
{code:sql}
select t.c_orders[0].o_orderstatus from hive.customer_complex t limit 10;
{code}

*Expected results (given from Hive):*
{noformat}
OK
O
F
NULL
O
O
NULL
O
O
NULL
F
{noformat}

*Actual results:*
{noformat}
null
null
null
null
null
null
null
null
null
null
{noformat}





[jira] [Resolved] (DRILL-7020) big varchar doesn't work with extractHeader=true

2019-06-11 Thread Anton Gozhiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Gozhiy resolved DRILL-7020.
-
Resolution: Duplicate

> big varchar doesn't work with extractHeader=true
> 
>
> Key: DRILL-7020
> URL: https://issues.apache.org/jira/browse/DRILL-7020
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Text  CSV
>Affects Versions: 1.15.0
>Reporter: benj
>Priority: Major
>
> with a TEST file of csv type like
> {code:java}
> col1,col2
> w,x
> ...y...,z
> {code}
> where ...y... is > 65536 characters string (let say 66000 for example)
> SELECT with +*extractHeader=false*+ are OK
> {code:java}
> SELECT * FROM TABLE(tmp.`TEST`(type => 'text', fieldDelimiter => ',', 
> extractHeader => false));
>     col1  | col2
> +-+--
> | w       | x
> | ...y... | z
> {code}
> But SELECT with +*extractHeader=true*+ gives an error
> {code:java}
> SELECT * FROM TABLE(tmp.`TEST`(type => 'text', fieldDelimiter => ',', 
> extractHeader => true));
> Error: UNSUPPORTED_OPERATION ERROR: Trying to write something big in a column
> columnIndex 1
> Limit 65536
> Fragment 0:0
> {code}
> Note that it is possible to use extractHeader=false with skipFirstLine=true, but 
> in this case it's not possible to automatically get column names.





[jira] [Created] (DRILL-7286) Joining a table with itself using subquery results in exception.

2019-06-06 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7286:
---

 Summary: Joining a table with itself using subquery results in 
exception.
 Key: DRILL-7286
 URL: https://issues.apache.org/jira/browse/DRILL-7286
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Steps:*
# Create some test table, like:
{code:sql}
create table t as select * from cp.`employee.json`;
{code}
# Execute the query:
{code:sql}
select * from (select * from t) d1 join t d2 on d1.employee_id = d2.employee_id 
limit 1;
{code}

*Expected result:*
A result should be returned normally.

*Actual result:*
Exception happened:
{noformat}
Error: SYSTEM ERROR: IndexOutOfBoundsException: index (2) must be less than 
size (2)


Please, refer to logs for more information.

[Error Id: 92a5ce8e-8640-4636-a897-8f360ddf8ea3 on userf87d-pc:31010]

  (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
during fragment initialization: index (2) must be less than size (2)
org.apache.drill.exec.work.foreman.Foreman.run():305
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748
  Caused By (java.lang.IndexOutOfBoundsException) index (2) must be less than 
size (2)
com.google.common.base.Preconditions.checkElementIndex():310
com.google.common.base.Preconditions.checkElementIndex():293
com.google.common.collect.RegularImmutableList.get():67
org.apache.calcite.util.Pair$3.get():295

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitProject():163

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitProject():44
org.apache.drill.exec.planner.physical.ProjectPrel.accept():105

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():196

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():44

org.apache.drill.exec.planner.physical.visitor.BasePrelVisitor.visitJoin():51
org.apache.drill.exec.planner.physical.JoinPrel.accept():71

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():196

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():44
org.apache.drill.exec.planner.physical.LimitPrel.accept():88

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():196

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():44

org.apache.drill.exec.planner.physical.visitor.BasePrelVisitor.visitExchange():46
org.apache.drill.exec.planner.physical.ExchangePrel.accept():36

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():196

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitPrel():44
org.apache.drill.exec.planner.physical.LimitPrel.accept():88

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitProject():157

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitProject():44
org.apache.drill.exec.planner.physical.ProjectPrel.accept():105

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitScreen():76

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.visitScreen():44
org.apache.drill.exec.planner.physical.ScreenPrel.accept():65

org.apache.drill.exec.planner.physical.visitor.StarColumnConverter.insertRenameProject():71

org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel():513
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():178
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():226
org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan():124
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():90
org.apache.drill.exec.work.foreman.Foreman.runSQL():593
org.apache.drill.exec.work.foreman.Foreman.run():276
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748 (state=,code=0)
{noformat}

*Note:* The same query without the subquery works fine:
{code:sql}
select * from t d1 join t d2 on d1.employee_id = d2.employee_id limit 1;
{code}
{noformat}
| employee_id |  full_name   | first_name | last_name | position_id | position_title | ...
{noformat}

[jira] [Created] (DRILL-7285) A temporary table has a higher priority than the cte table with the same name.

2019-06-06 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7285:
---

 Summary: A temporary table has a higher priority than the cte 
table with the same name.
 Key: DRILL-7285
 URL: https://issues.apache.org/jira/browse/DRILL-7285
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Steps:*
# Switch to a workspace:
{code:sql}
use dfs.tmp
{code}
# Create a temporary table:
{code:sql}
create temporary table t as select 'temp table' as a;
{code}
# Run the following query:
{code:sql}
with t as (select 'cte' as a) select * from t;
{code}

*Expected result:* content from the CTE table should be returned:
{noformat}
+-----+
|  a  |
+-----+
| cte |
+-----+
{noformat}

*Actual result:* the temporary table content is returned instead:
{noformat}
+------------+
|     a      |
+------------+
| temp table |
+------------+
{noformat}
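For reference, standard SQL scoping resolves a CTE before a real table of the same name. A quick comparison sketch using Python's built-in sqlite3 module (not part of the original report; it only illustrates the expected precedence):

```python
import sqlite3

# Comparison sketch (SQLite, not Drill): a CTE named like an existing
# table shadows that table within the statement.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (a text)")
conn.execute("insert into t values ('temp table')")
row = conn.execute("with t as (select 'cte' as a) select * from t").fetchone()
print(row[0])  # cte
```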





[jira] [Created] (DRILL-7258) [Text V3 Reader] Unsupported operation error is thrown when selecting a column with a long string

2019-05-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7258:
---

 Summary: [Text V3 Reader] Unsupported operation error is thrown 
when selecting a column with a long string
 Key: DRILL-7258
 URL: https://issues.apache.org/jira/browse/DRILL-7258
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Data:*
10.tbl is attached

*Steps:*
# Set exec.storage.enable_v3_text_reader=true
# Run the following query:
{code:sql}
select * from dfs.`/tmp/drill/data/10.tbl`
{code}

*Expected result:*
The query should return results normally.

*Actual result:*
Exception is thrown:
{noformat}
UNSUPPORTED_OPERATION ERROR: Drill Remote Exception



  (java.lang.Exception) UNSUPPORTED_OPERATION ERROR: Text column is too large.

Column 0
Limit 65536
Fragment 0:0

[Error Id: 5f73232f-f0c0-48aa-ab0f-b5f86495d3c8 on userf87d-pc:31010]
org.apache.drill.common.exceptions.UserException$Builder.build():630

org.apache.drill.exec.store.easy.text.compliant.v3.BaseFieldOutput.append():131

org.apache.drill.exec.store.easy.text.compliant.v3.TextReader.parseValueAll():208

org.apache.drill.exec.store.easy.text.compliant.v3.TextReader.parseValue():225

org.apache.drill.exec.store.easy.text.compliant.v3.TextReader.parseField():341

org.apache.drill.exec.store.easy.text.compliant.v3.TextReader.parseRecord():137

org.apache.drill.exec.store.easy.text.compliant.v3.TextReader.parseNext():388

org.apache.drill.exec.store.easy.text.compliant.v3.CompliantTextBatchReader.next():220

org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.next():132
org.apache.drill.exec.physical.impl.scan.ReaderState.readBatch():397
org.apache.drill.exec.physical.impl.scan.ReaderState.next():354
org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction():184
org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.next():159
org.apache.drill.exec.physical.impl.protocol.OperatorDriver.doNext():176
org.apache.drill.exec.physical.impl.protocol.OperatorDriver.next():114
org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch.next():147
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.physical.impl.BaseRootExec.next():104
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
org.apache.drill.exec.physical.impl.BaseRootExec.next():94
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
...():0
org.apache.hadoop.security.UserGroupInformation.doAs():1746
org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
org.apache.drill.common.SelfCleaningRunnable.run():38
...():0
{noformat}

*Note:* Works fine with the v2 reader.
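The attached 10.tbl is not included here; as a hedged stand-in, the script below generates a file with one field over the 65536-byte limit named in the error (the pipe-delimited, two-line layout is an assumption, not the actual attachment):

```python
import os
import tempfile

# Sketch only: builds a stand-in for the unavailable 10.tbl attachment,
# with one field longer than the 65536-byte text-column limit.
big_field = "y" * 66000                 # > 65536 characters
path = os.path.join(tempfile.gettempdir(), "10.tbl")
with open(path, "w") as f:
    f.write("w|x\n")
    f.write(big_field + "|z\n")

print(len(big_field) > 65536)  # True
```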





[jira] [Created] (DRILL-7257) [Text V3 Reader] dir0 is empty if a column filter returns all lines.

2019-05-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7257:
---

 Summary: [Text V3 Reader] dir0 is empty if a column filter returns 
all lines.
 Key: DRILL-7257
 URL: https://issues.apache.org/jira/browse/DRILL-7257
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy
 Attachments: lineitempart.zip

*Data:*
Unzip the attached archive: lineitempart.zip.

*Steps:*
# Set exec.storage.enable_v3_text_reader=true
# Run the following query:
{code:sql}
select columns[0], dir0 from dfs.tmp.`/drill/data/lineitempart` where dir0=1994 
and columns[0]>29766 order by columns[0] limit 1;
{code}

*Expected result:*
{noformat}
+--------+------+
| EXPR$0 | dir0 |
+--------+------+
| 29767  | 1994 |
+--------+------+
{noformat}

*Actual result:*
{noformat}
+--------+------+
| EXPR$0 | dir0 |
+--------+------+
| 29767  |      |
+--------+------+
{noformat}

*Note:* If the filter is changed a bit so that it doesn't return all the lines, everything is OK:
{noformat}
apache drill> select columns[0], dir0 from dfs.tmp.`/drill/data/lineitempart` 
where dir0=1994 and columns[0]>29767 order by columns[0] limit 1;
+--------+------+
| EXPR$0 | dir0 |
+--------+------+
| 29792  | 1994 |
+--------+------+
{noformat}





[jira] [Created] (DRILL-7216) Auto limit is happening on the Drill Web-UI while the limit checkbox is unchecked

2019-04-25 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7216:
---

 Summary: Auto limit is happening on the Drill Web-UI while the 
limit checkbox is unchecked
 Key: DRILL-7216
 URL: https://issues.apache.org/jira/browse/DRILL-7216
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Steps:*
# On Web-UI, open Query page
# Set "Limit results to" 10, but *do not check the checkbox*
# Submit a simple query:
{code:sql}
SELECT * FROM cp.`employee.json` LIMIT 20
{code}

*Expected result:*
Results should be limited to 20 (as set in the query).

*Actual result:*
Auto limit is applied and the results are limited to 10 rows.





[jira] [Created] (DRILL-7208) Drill commit is not shown if Drill is built from the release sources.

2019-04-24 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7208:
---

 Summary: Drill commit is not shown if Drill is built from the release 
sources.
 Key: DRILL-7208
 URL: https://issues.apache.org/jira/browse/DRILL-7208
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Steps:*
 # Download the rc1 sources tarball:
 
[apache-drill-1.16.0-src.tar.gz|http://home.apache.org/~sorabh/drill/releases/1.16.0/rc1/apache-drill-1.16.0-src.tar.gz]
 # Unpack
 # Build:
{noformat}
mvn clean install -DskipTests
{noformat}

 # Start Drill in embedded mode:
{noformat}
Linux:
distribution/target/apache-drill-1.16.0/apache-drill-1.16.0/bin/drill-embedded
Windows:
distribution\target\apache-drill-1.16.0\apache-drill-1.16.0\bin\sqlline.bat -u 
"jdbc:drill:zk=local"
{noformat}

 # Run the query:
{code:sql}
select * from sys.version;
{code}

*Expected result:*
 Drill version, commit_id, commit_message, commit_time, build_email, build_time 
should be correctly displayed.

*Actual result:*
{noformat}
apache drill> select * from sys.version;
+---------+-----------+----------------+-------------+-------------+------------+
| version | commit_id | commit_message | commit_time | build_email | build_time |
+---------+-----------+----------------+-------------+-------------+------------+
| 1.16.0  | Unknown   |                |             | Unknown     |            |
+---------+-----------+----------------+-------------+-------------+------------+
{noformat}





[jira] [Created] (DRILL-7186) Missing storage.json REST endpoint.

2019-04-19 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7186:
---

 Summary: Missing storage.json REST endpoint.
 Key: DRILL-7186
 URL: https://issues.apache.org/jira/browse/DRILL-7186
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy


*Steps:*
1. Open the page: http://<host>:8047/storage.json

*Expected result:*
storage.json is opened

*Actual result:*
{noformat}
{
  "errorMessage" : "HTTP 404 Not Found"
}
{noformat}

*Note:* Works fine for individual plugin pages, like: /storage/dfs.json





[jira] [Created] (DRILL-7181) [Text V3 Reader] Exception with inadequate message is thrown when selecting columns as an array with extractHeader set to true

2019-04-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7181:
---

 Summary: [Text V3 Reader] Exception with inadequate message is 
thrown when selecting columns as an array with extractHeader set to true
 Key: DRILL-7181
 URL: https://issues.apache.org/jira/browse/DRILL-7181
 Project: Apache Drill
  Issue Type: Bug
Reporter: Anton Gozhiy


*Prerequisites:*
# Create a simple .csv file with header, like this:
{noformat}
col1,col2,col3
1,2,3
4,5,6
7,8,9
{noformat}
# Set exec.storage.enable_v3_text_reader=true
# Set "extractHeader": true for csv format in dfs storage plugin.

*Query:*
{code:sql}
select columns[0] from dfs.tmp.`/test.csv`
{code}

*Expected result:* An exception should happen; here is the message from the V2 reader:
{noformat}
UNSUPPORTED_OPERATION ERROR: Drill Remote Exception



  (java.lang.Exception) UNSUPPORTED_OPERATION ERROR: With extractHeader 
enabled, only header names are supported

column name columns
column index
Fragment 0:0

[Error Id: 5affa696-1dbd-43d7-ac14-72d235c00f43 on userf87d-pc:31010]
org.apache.drill.common.exceptions.UserException$Builder.build():630

org.apache.drill.exec.store.easy.text.compliant.FieldVarCharOutput.<init>():106

org.apache.drill.exec.store.easy.text.compliant.CompliantTextRecordReader.setup():139
org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():321
org.apache.drill.exec.physical.impl.ScanBatch.internalNext():216
org.apache.drill.exec.physical.impl.ScanBatch.next():271
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.physical.impl.BaseRootExec.next():104
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
org.apache.drill.exec.physical.impl.BaseRootExec.next():94
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
...():0
org.apache.hadoop.security.UserGroupInformation.doAs():1746
org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
org.apache.drill.common.SelfCleaningRunnable.run():38
...():0
{noformat}

*Actual result:* The exception message is inadequate:
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: EXECUTION_ERROR ERROR: 
Table schema must have exactly one column.

Exception thrown from org.apache.drill.exec.physical.impl.scan.ScanOperatorExec
Fragment 0:0

[Error Id: a76a1576-419a-413f-840f-088157167a6d on userf87d-pc:31010]

  (java.lang.IllegalStateException) Table schema must have exactly one column.

org.apache.drill.exec.physical.impl.scan.columns.ColumnsArrayManager.resolveColumn():108

org.apache.drill.exec.physical.impl.scan.project.ReaderLevelProjection.resolveSpecial():91

org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.resolveRootTuple():62

org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.<init>():52

org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.doExplicitProjection():223

org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.reviseOutputProjection():155

org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.endBatch():117

org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.defineSchema():94

org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.defineSchema():105
org.apache.drill.exec.physical.impl.scan.ReaderState.buildSchema():300
org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction():182
org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.buildSchema():122
{noformat}

[jira] [Created] (DRILL-7145) Exceptions that happen while retrieving values from a ValueVector are not displayed in the Drill Web UI

2019-04-01 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7145:
---

 Summary: Exceptions that happen while retrieving values from a 
ValueVector are not displayed in the Drill Web UI
 Key: DRILL-7145
 URL: https://issues.apache.org/jira/browse/DRILL-7145
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.16.0


*Data:*
A text file with the following content:
{noformat}
Id,col1,col2
1,aaa,bbb
2,ccc,ddd
3,eee
4,fff,ggg
{noformat}
Note that the record with Id 3 has no value for the third column.

exec.storage.enable_v3_text_reader should be false.
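The ragged record can be spotted with a short script (a sketch, not part of the original report; the inline data mirrors the file above):

```python
import csv
import io

# The test data, inline. The record with Id 3 is missing its col2 value --
# the ragged row that triggers DRILL-4814 in the v2 text reader.
data = """Id,col1,col2
1,aaa,bbb
2,ccc,ddd
3,eee
4,fff,ggg
"""

rows = list(csv.reader(io.StringIO(data)))
header, body = rows[0], rows[1:]
ragged = [r for r in body if len(r) < len(header)]
print(ragged)  # [['3', 'eee']]
```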

*Submit the query from the Web UI:*
{code:sql}
select * from 
table(dfs.tmp.`/drill/text/test`(type=>'text',lineDelimiter=>'\n',fieldDelimiter=>',',extractHeader=>true))
{code}

*Expected result:*
Exception should happen due to DRILL-4814. It should be properly displayed.

*Actual result:*
Incorrect data is returned but without error. Query status: success.







[jira] [Created] (DRILL-7105) Error while building the Drill native client

2019-03-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7105:
---

 Summary: Error while building the Drill native client
 Key: DRILL-7105
 URL: https://issues.apache.org/jira/browse/DRILL-7105
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy
 Fix For: 1.16.0


*Steps:*
# cd contrib/native/client
# mkdir build
# cd build && cmake -std=c++11 -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug ..
# make

*Expected result:*
The native client is built successfully.

*Actual result:*
Error happens:
 
{noformat}
[  4%] Built target y2038
[  7%] Building CXX object 
src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o
In file included from /usr/include/c++/5/mutex:35:0,
 from /usr/local/include/google/protobuf/stubs/mutex.h:33,
 from /usr/local/include/google/protobuf/stubs/common.h:52,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires 
compiler and library support for the ISO C++ 2011 standard. This support must 
be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from /usr/local/include/google/protobuf/stubs/common.h:52:0,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:9,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
/usr/local/include/google/protobuf/stubs/mutex.h:58:8: error: 'mutex' in 
namespace 'std' does not name a type
   std::mutex mu_;
^
/usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
google::protobuf::internal::WrappedMutex::Lock()':
/usr/local/include/google/protobuf/stubs/mutex.h:51:17: error: 'mu_' was not 
declared in this scope
   void Lock() { mu_.lock(); }
 ^
/usr/local/include/google/protobuf/stubs/mutex.h: In member function 'void 
google::protobuf::internal::WrappedMutex::Unlock()':
/usr/local/include/google/protobuf/stubs/mutex.h:52:19: error: 'mu_' was not 
declared in this scope
   void Unlock() { mu_.unlock(); }
   ^
/usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
/usr/local/include/google/protobuf/stubs/mutex.h:61:7: error: expected 
nested-name-specifier before 'Mutex'
 using Mutex = WrappedMutex;
   ^
/usr/local/include/google/protobuf/stubs/mutex.h:66:28: error: expected ')' 
before '*' token
   explicit MutexLock(Mutex *mu) : mu_(mu) { this->mu_->Lock(); }
^
/usr/local/include/google/protobuf/stubs/mutex.h:69:3: error: 'Mutex' does not 
name a type
   Mutex *const mu_;
   ^
/usr/local/include/google/protobuf/stubs/mutex.h: In destructor 
'google::protobuf::internal::MutexLock::~MutexLock()':
/usr/local/include/google/protobuf/stubs/mutex.h:67:24: error: 'class 
google::protobuf::internal::MutexLock' has no member named 'mu_'
   ~MutexLock() { this->mu_->Unlock(); }
^
/usr/local/include/google/protobuf/stubs/mutex.h: At global scope:
/usr/local/include/google/protobuf/stubs/mutex.h:80:33: error: expected ')' 
before '*' token
   explicit MutexLockMaybe(Mutex *mu) :
 ^
In file included from /usr/local/include/google/protobuf/arena.h:48:0,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.h:23,
 from 
/home/agozhiy/git_repo/drill/contrib/native/client/src/protobuf/BitControl.pb.cc:4:
/usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
/usr/include/c++/5/typeinfo:39:37: error: expected unqualified-id before end of 
line
/usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
/usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
/usr/include/c++/5/typeinfo:39:37: error: expected '}' before end of line
/usr/include/c++/5/typeinfo:39:37: error: expected declaration before end of 
line
src/protobuf/CMakeFiles/protomsgs.dir/build.make:62: recipe for target 
'src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o' failed
make[2]: *** [src/protobuf/CMakeFiles/protomsgs.dir/BitControl.pb.cc.o] Error 1
CMakeFiles/Makefile2:223: recipe for target 
'src/protobuf/CMakeFiles/protomsgs.dir/all' failed
make[1]: *** [src/protobuf/CMakeFiles/protomsgs.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 2
{noformat}






[jira] [Resolved] (DRILL-6976) SchemaChangeException happens when using split function in subquery if it returns empty result.

2019-03-12 Thread Anton Gozhiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Gozhiy resolved DRILL-6976.
-
Resolution: Fixed

> SchemaChangeException happens when using split function in subquery if it 
> returns empty result.
> ---
>
> Key: DRILL-6976
> URL: https://issues.apache.org/jira/browse/DRILL-6976
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Assignee: Bohdan Kazydub
>Priority: Major
> Fix For: 1.16.0
>
>
> *Query:*
> {code:sql}
> select substr(col, 2, 3) 
> from (select split(n_comment, ' ') [3] col 
>   from cp.`tpch/nation.parquet` 
>   where n_nationkey = -1 
>   group by n_comment 
>   order by n_comment 
>   limit 5);
> {code}
> *Expected result:*
> {noformat}
> +-+
> | EXPR$0  |
> +-+
> +-+
> {noformat}
> *Actual result:*
> {noformat}
> Error: SYSTEM ERROR: SchemaChangeException: Failure while trying to 
> materialize incoming schema.  Errors:
>  
> Error in expression at index -1.  Error: Missing function implementation: 
> [castVARCHAR(NULL-OPTIONAL, BIGINT-REQUIRED)].  Full expression: --UNKNOWN 
> EXPRESSION--..
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: 86515d74-7b9c-4949-8ece-c9c17e00afc3 on userf87d-pc:31010]
>   (org.apache.drill.exec.exception.SchemaChangeException) Failure while 
> trying to materialize incoming schema.  Errors:
>  
> Error in expression at index -1.  Error: Missing function implementation: 
> [castVARCHAR(NULL-OPTIONAL, BIGINT-REQUIRED)].  Full expression: --UNKNOWN 
> EXPRESSION--..
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():498
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143
> org.apache.drill.exec.record.AbstractRecordBatch.next():186
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():422
> org.apache.hadoop.security.UserGroupInformation.doAs():1746
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():284
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624
> java.lang.Thread.run():748 (state=,code=0)
> {noformat}
> *Note:* The filter "where n_nationkey = -1" doesn't return any rows. With 
> "= 1", for example, the query returns a result without error.
> *Workaround:* Use cast on the split function, like
> {code:sql}
> cast(split(n_comment, ' ') [3] as varchar)
> {code}





[jira] [Created] (DRILL-7084) ResultSet getObject method throws not implemented exception if the column type is NULL

2019-03-07 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7084:
---

 Summary: ResultSet getObject method throws not implemented 
exception if the column type is NULL
 Key: DRILL-7084
 URL: https://issues.apache.org/jira/browse/DRILL-7084
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


This method is used by some tools, for example DBeaver. Not reproduced with 
sqlline or Drill Web-UI.

*Query:*
{code:sql}
select coalesce(n_name1, n_name2) from cp.`tpch/nation.parquet` limit 1;
{code}

*Expected result:*
null

*Actual result:*
Exception is thrown:
{noformat}
java.lang.RuntimeException: not implemented
at 
oadd.org.apache.calcite.avatica.AvaticaSite.notImplemented(AvaticaSite.java:421)
at oadd.org.apache.calcite.avatica.AvaticaSite.get(AvaticaSite.java:380)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.getObject(DrillResultSetImpl.java:183)
at 
org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCResultSetImpl.getObject(JDBCResultSetImpl.java:628)
at 
org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCObjectValueHandler.fetchColumnValue(JDBCObjectValueHandler.java:60)
at 
org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCAbstractValueHandler.fetchValueObject(JDBCAbstractValueHandler.java:49)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetDataReceiver.fetchRow(ResultSetDataReceiver.java:122)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.fetchQueryData(SQLQueryJob.java:729)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.executeStatement(SQLQueryJob.java:465)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.lambda$0(SQLQueryJob.java:392)
at org.jkiss.dbeaver.model.DBUtils.tryExecuteRecover(DBUtils.java:1598)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:390)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.extractData(SQLQueryJob.java:822)
at 
org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:2532)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:93)
at org.jkiss.dbeaver.model.DBUtils.tryExecuteRecover(DBUtils.java:1598)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:91)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:101)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)

{noformat}






[jira] [Created] (DRILL-7041) CompileException happens if a nested coalesce function returns null

2019-02-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7041:
---

 Summary: CompileException happens if a nested coalesce function 
returns null
 Key: DRILL-7041
 URL: https://issues.apache.org/jira/browse/DRILL-7041
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy


*Query:*
{code:sql}
select coalesce(coalesce(n_name1, n_name2), n_name) from 
cp.`tpch/nation.parquet`
{code}

*Expected result:*
Values from the "n_name" column should be returned.
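The expected fall-through behavior matches standard COALESCE semantics; a comparison sketch in SQLite via Python's sqlite3 module ('ALGERIA' is an illustrative n_name value, not taken from the report):

```python
import sqlite3

# Comparison sketch (SQLite, not Drill): COALESCE over two NULLs falls
# through to the outer argument.
conn = sqlite3.connect(":memory:")
row = conn.execute("select coalesce(coalesce(null, null), 'ALGERIA')").fetchone()
print(row[0])  # ALGERIA
```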

*Actual result:*
An exception happens:
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
CompileException: Line 57, Column 27: Assignment conversion not possible from 
type "org.apache.drill.exec.expr.holders.NullableVarCharHolder" to type 
"org.apache.drill.exec.vector.UntypedNullHolder" Fragment 0:0 Please, refer to 
logs for more information. [Error Id: e54d5bfd-604d-4a39-b62f-33bb964e5286 on 
userf87d-pc:31010] (org.apache.drill.exec.exception.SchemaChangeException) 
Failure while attempting to load generated class 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():573
 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
 org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143 
org.apache.drill.exec.record.AbstractRecordBatch.next():186 
org.apache.drill.exec.physical.impl.BaseRootExec.next():104 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83 
org.apache.drill.exec.physical.impl.BaseRootExec.next():94 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284 
java.security.AccessController.doPrivileged():-2 
javax.security.auth.Subject.doAs():422 
org.apache.hadoop.security.UserGroupInformation.doAs():1746 
org.apache.drill.exec.work.fragment.FragmentExecutor.run():284 
org.apache.drill.common.SelfCleaningRunnable.run():38 
java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
java.lang.Thread.run():748 Caused By 
(org.apache.drill.exec.exception.ClassTransformationException) 
java.util.concurrent.ExecutionException: 
org.apache.drill.exec.exception.ClassTransformationException: Failure 
generating transformation classes for value: package 
org.apache.drill.exec.test.generated; import 
org.apache.drill.exec.exception.SchemaChangeException; import 
org.apache.drill.exec.expr.holders.BigIntHolder; import 
org.apache.drill.exec.expr.holders.BitHolder; import 
org.apache.drill.exec.expr.holders.NullableVarBinaryHolder; import 
org.apache.drill.exec.expr.holders.NullableVarCharHolder; import 
org.apache.drill.exec.expr.holders.VarCharHolder; import 
org.apache.drill.exec.ops.FragmentContext; import 
org.apache.drill.exec.record.RecordBatch; import 
org.apache.drill.exec.vector.UntypedNullHolder; import 
org.apache.drill.exec.vector.UntypedNullVector; import 
org.apache.drill.exec.vector.VarCharVector; public class ProjectorGen35 { 
BigIntHolder const6; BitHolder constant9; UntypedNullHolder constant13; 
VarCharVector vv14; UntypedNullVector vv19; public void doEval(int inIndex, int 
outIndex) throws SchemaChangeException { { UntypedNullHolder out0 = new 
UntypedNullHolder(); if (constant9 .value == 1) { if (constant13 .isSet!= 0) { 
out0 = constant13; } } else { VarCharHolder out17 = new VarCharHolder(); { 
out17 .buffer = vv14 .getBuffer(); long startEnd = vv14 
.getAccessor().getStartEnd((inIndex)); out17 .start = ((int) startEnd); out17 
.end = ((int)(startEnd >> 32)); } // start of eval portion of 
convertToNullableVARCHAR function. // NullableVarCharHolder out18 = new 
NullableVarCharHolder(); { final NullableVarCharHolder output = new 
NullableVarCharHolder(); VarCharHolder input = out17; 
GConvertToNullableVarCharHolder_eval: { output.isSet = 1; output.start = 
input.start; output.end = input.end; output.buffer = input.buffer; } out18 = 
output; } // end of eval portion of convertToNullableVARCHAR function. 
// if (out18 .isSet!= 0) { out0 = out18; } } if (!(out0 .isSet == 0)) { 
vv19 .getMutator().set((outIndex), out0 .isSet, out0); } } } public void 
doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) 
throws SchemaChangeException { { UntypedNullHolder out1 = new 
UntypedNullHolder(); NullableVarBinaryHolder out2 = new 
NullableVarBinaryHolder(); /** start SETUP for function isnotnull **/ { 
NullableVarBinaryHolder input = out2; 
GNullOpNullableVarBinaryHolder$IsNotNull_setup: {} } /** end SETUP for function 
isnotnull **/ // start of eval portion of isnotnull function. // 
BitHolder out3 = new BitHolder(); { final BitHolder out = new BitHolder(); 
NullableVarBinaryHolder input = out2; 

[jira] [Created] (DRILL-7040) Update Protocol Buffers syntax to proto3

2019-02-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7040:
---

 Summary: Update Protocol Buffers syntax to proto3
 Key: DRILL-7040
 URL: https://issues.apache.org/jira/browse/DRILL-7040
 Project: Apache Drill
  Issue Type: Task
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


Updating the protobuf library version is addressed by DRILL-6642, but we still 
use proto2 syntax. To update the syntax to proto3 we need to meet some 
requirements:
# Proto3 doesn't support required fields, so all existing required fields need 
to be changed to optional. If we expect such fields to always be present in the 
messages, we need to revisit that approach.
# Custom default values are no longer supported, and Drill uses custom defaults 
in some places. The impact of removing them should be investigated further, but 
it would definitely require changes in logic.
# There is no longer a way to determine whether a missing field was omitted or 
was explicitly assigned the default value. Whether the code relies on this 
needs investigation.
# Support for nested groups is excluded from proto3. This shouldn't be a 
problem as they are not used in Drill.
# Protostuff and protobuf-maven-plugin should also be updated, which may cause 
some compatibility issues.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7022) Partition pruning is not happening the first time after the metadata auto refresh.

2019-02-01 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7022:
---

 Summary: Partition pruning is not happening the first time after 
the metadata auto refresh.
 Key: DRILL-7022
 URL: https://issues.apache.org/jira/browse/DRILL-7022
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Data creation:*
# Create table:
{code:sql}
create table dfs.tmp.`orders` 
partition by (o_orderstatus)
as select * from cp.`tpch/orders.parquet`
{code}
# Create table metadata:
{code:sql}
refresh table metadata dfs.tmp.`orders`
{code}

*Steps:*
# Modify the table to trigger metadata auto refresh:
{noformat}
hadoop fs -mkdir /tmp/orders/111
{noformat}
# Run the query:
{code:sql}
explain plan for 
select * from dfs.tmp.`orders` 
where o_orderstatus = 'O' and o_orderdate < '1995-03-10'
{code}

*Expected result:*
Partition pruning happens:
{noformat}
... numFiles=1, numRowGroups=1, usedMetadataFile=true ...
{noformat}

*Actual result:*
Partition pruning doesn't happen:
{noformat}
... numFiles=1, numRowGroups=3, usedMetadataFile=true
{noformat}

*Note:* This reproduces only the first time after the auto refresh; when the 
query is repeated it works as expected.





[jira] [Created] (DRILL-7010) Wrong result is returned if filtering by a decimal column using old parquet data with old metadata file.

2019-01-28 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7010:
---

 Summary: Wrong result is returned if filtering by a decimal column 
using old parquet data with old metadata file.
 Key: DRILL-7010
 URL: https://issues.apache.org/jira/browse/DRILL-7010
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy
 Attachments: partsupp_old.zip, supplier_old.zip

*Prerequisites:*
- The data was generated by Drill 1.14.0-SNAPSHOT (commit 
4c4953bcab4886be14fc9b7f95a77caa86a7629f). See attachment.
- set store.parquet.reader.strings_signed_min_max = "true"

*Query #1:*
{code:sql}
select *
from dfs.tmp.`supplier_old`
where not s_acctbal > -900
{code}

*Expected result:*
{noformat}
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
22  Supplier#00022  okiiQFk 8lm6EVX6Q0,bEcO 4   14-144-830-2814 
-966.20  ironically among the deposits. closely expre
22  Supplier#00022  okiiQFk 8lm6EVX6Q0,bEcO 4   14-144-830-2814 
-966.20  ironically among the deposits. closely expre
22  Supplier#00022  okiiQFk 8lm6EVX6Q0,bEcO 4   14-144-830-2814 
-966.20  ironically among the deposits. closely expre
{noformat}

*Actual result:*
{noformat}
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
65  Supplier#00065  BsAnHUmSFArppKrM22  32-444-835-2434 
-963.79 l ideas wake carefully around the regular packages. furiously ruthless 
pinto bea
{noformat}

*Query #2*
{code:sql}
select ps_availqty, ps_supplycost, ps_comment
from dfs.tmp.`partsupp_old`
where ps_supplycost > 999.9
{code}

*Expected result:*
{noformat}
5136999.92  lets grow carefully. slyly silent ideas about the foxes nag 
blithely ironi
8324999.93  ly final instructions. closely final deposits nag furiously 
alongside of the furiously dogged theodolites. blithely unusual theodolites are 
furi
5070999.99   ironic, special deposits. carefully final deposits haggle 
fluffily. furiously final foxes use furiously furiously ironic accounts. package
6915999.95  fluffily unusual packages doubt even, regular requests. ironic 
requests detect carefully blithely silen
1761999.95  lyly about the permanently ironic instructions. carefully 
ironic pinto beans
2120999.97  ts haggle blithely about the pending, regular ideas! e
1615999.92  riously ironic foxes detect fluffily across the regular packages
{noformat}

*Actual result:*
No data is returned.






[jira] [Created] (DRILL-6991) Kerberos ticket is being dumped in the log if log level is "debug" for stdout

2019-01-22 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6991:
---

 Summary: Kerberos ticket is being dumped in the log if log level 
is "debug" for stdout 
 Key: DRILL-6991
 URL: https://issues.apache.org/jira/browse/DRILL-6991
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Prerequisites:*
 # Drill is installed on a cluster with Kerberos security
 # In conf/logback.xml, set the following log level:
{code:xml}
  


  
{code}

*Steps:*
# Start Drill
# Connect using sqlline using the following string:
{noformat}
bin/sqlline -u "jdbc:drill:zk=;principal="
{noformat}

*Expected result:*
No sensitive information should be displayed

*Actual result:*
Kerberos ticket and session key are dumped into the console output:
{noformat}
14:35:38.806 [TGT Renewer for mapr/node1.cluster.com@NODE1] DEBUG 
o.a.h.security.UserGroupInformation - Found tgt Ticket (hex) = 
: 61 82 01 3D 30 82 01 39   A0 03 02 01 05 A1 07 1B  a..=0..9
0010: 05 4E 4F 44 45 31 A2 1A   30 18 A0 03 02 01 02 A1  .NODE1..0...
0020: 11 30 0F 1B 06 6B 72 62   74 67 74 1B 05 4E 4F 44  .0...krbtgt..NOD
0030: 45 31 A3 82 01 0B 30 82   01 07 A0 03 02 01 12 A1  E10.
0040: 03 02 01 01 A2 81 FA 04   81 F7 03 8D A9 FA 7D 89  
0050: 1B DF 37 B7 4D E6 6C 99   3E 8F FA 48 D9 9A 79 F3  ..7.M.l.>..H..y.
0060: 92 34 7F BF 67 1E 77 4A   2F C9 AF 82 93 4E 46 1D  .4..g.wJ/NF.
0070: 41 74 B0 AF 41 A8 8B 02   71 83 CC 14 51 72 60 EE  At..A...q...Qr`.
0080: 29 67 14 F0 A6 33 63 07   41 AA 8D DC 7B 5B 41 F3  )g...3c.A[A.
0090: 83 48 8B 2A 0B 4D 6D 57   9A 6E CF 6B DC 0B C0 D1  .H.*.MmW.n.k
00A0: 83 BB 27 40 88 7E 9F 2B   D1 FD A8 6A E1 BF F6 CC  ..'@...+...j
00B0: 0E 0C FB 93 5D 69 9A 8B   11 88 0C F2 7C E1 FD 04  ]i..
00C0: F5 AB 66 0C A4 A4 7B 30   D1 7F F1 2D D6 A1 52 D1  ..f0...-..R.
00D0: 79 59 F2 06 CB 65 FB 73   63 1D 5B E9 4F 28 73 EB  yY...e.sc.[.O(s.
00E0: 72 7F 04 46 34 56 F4 40   6C C0 2C 39 C0 5B C6 25  r..F4V.@l.,9.[.%
00F0: ED EF 64 07 CE ED 35 9D   D7 91 6C 8F C9 CE 16 F5  ..d...5...l.
0100: CA 5E 6F DE 08 D2 68 30   C7 03 97 E7 C0 FF D9 52  .^o...h0...R
0110: F8 1D 2F DB 63 6D 12 4A   CD 60 AD D0 BA FA 4B CF  ../.cm.J.`K.
0120: 2C B9 8C CA 5A E6 EC 10   5A 0A 1F 84 B0 80 BD 39  ,...Z...Z..9
0130: 42 2C 33 EB C0 AA 0D 44   F0 F4 E9 87 24 43 BB 9A  B,3D$C..
0140: 52 R

Client Principal = mapr/node1.cluster.com@NODE1
Server Principal = krbtgt/NODE1@NODE1
Session Key = EncryptionKey: keyType=18 keyBytes (hex dump)=
: 50 DA D1 D7 91 D3 64 BE   45 7B D8 02 25 81 18 25  P.d.E...%..%
0010: DA 59 4F BA 76 67 BB 39   9C F7 17 46 A7 C5 00 E2  .YO.vg.9...F
{noformat}






[jira] [Created] (DRILL-6982) Affected rows count is not returned by Drill if return_result_set_for_ddl is false

2019-01-16 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6982:
---

 Summary: Affected rows count is not returned by Drill if 
return_result_set_for_ddl is false
 Key: DRILL-6982
 URL: https://issues.apache.org/jira/browse/DRILL-6982
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Prerequisites:*
{code:sql}
set `exec.query.return_result_set_for_ddl`= false;
{code}

*Query:*
{code:sql}
create table dfs.tmp.`nation` as select * from cp.`tpch/nation.parquet`;
{code}

*Expected result:*
Drill should return the number of affected rows (25 in this case)

*Actual Result:*
The table was created, but affected rows count wasn't returned:
{noformat}
No rows affected (1.755 seconds)
{noformat}






[jira] [Created] (DRILL-6976) SchemaChangeException happens when using split function in subquery if it returns empty result.

2019-01-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6976:
---

 Summary: SchemaChangeException happens when using split function 
in subquery if it returns empty result.
 Key: DRILL-6976
 URL: https://issues.apache.org/jira/browse/DRILL-6976
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Query:*
{code:sql}
select substr(col, 2, 3) 
from (select split(n_comment, ' ') [3] col 
  from cp.`tpch/nation.parquet` 
  where n_nationkey = -1 
  group by n_comment 
  order by n_comment 
  limit 5);
{code}

*Expected result:*
{noformat}
+-+
| EXPR$0  |
+-+
+-+
{noformat}

*Actual result:*
{noformat}
Error: SYSTEM ERROR: SchemaChangeException: Failure while trying to materialize 
incoming schema.  Errors:
 
Error in expression at index -1.  Error: Missing function implementation: 
[castVARCHAR(NULL-OPTIONAL, BIGINT-REQUIRED)].  Full expression: --UNKNOWN 
EXPRESSION--..

Fragment 0:0

Please, refer to logs for more information.

[Error Id: 86515d74-7b9c-4949-8ece-c9c17e00afc3 on userf87d-pc:31010]

  (org.apache.drill.exec.exception.SchemaChangeException) Failure while trying 
to materialize incoming schema.  Errors:
 
Error in expression at index -1.  Error: Missing function implementation: 
[castVARCHAR(NULL-OPTIONAL, BIGINT-REQUIRED)].  Full expression: --UNKNOWN 
EXPRESSION--..

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():498

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.physical.impl.BaseRootExec.next():104
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
org.apache.drill.exec.physical.impl.BaseRootExec.next():94
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():422
org.apache.hadoop.security.UserGroupInformation.doAs():1746
org.apache.drill.exec.work.fragment.FragmentExecutor.run():284
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748 (state=,code=0)
{noformat}

*Note:* The filter "where n_nationkey = -1" doesn't return any rows. With " = 
1", for example, the query returns a result without error.

*Workaround:* Use cast on the split function, like
{code:sql}
cast(split(n_comment, ' ') [3] as varchar)
{code}
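
For clarity, this is the original query with the workaround applied (an 
untested sketch, combining the query and cast from above):
{code:sql}
select substr(col, 2, 3) 
from (select cast(split(n_comment, ' ') [3] as varchar) col 
  from cp.`tpch/nation.parquet` 
  where n_nationkey = -1 
  group by n_comment 
  order by n_comment 
  limit 5);
{code}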






[jira] [Created] (DRILL-6936) TestGracefulShutdown.gracefulShutdownThreadShouldBeInitializedBeforeClosingDrillbit fails if loopback address is set in hosts

2018-12-28 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6936:
---

 Summary: 
TestGracefulShutdown.gracefulShutdownThreadShouldBeInitializedBeforeClosingDrillbit
 fails if loopback address is set in hosts
 Key: DRILL-6936
 URL: https://issues.apache.org/jira/browse/DRILL-6936
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Prerequisites:*
- Loopback address is set for the host at /etc/hosts

*Steps:*
# Run the test 
*TestGracefulShutdown.gracefulShutdownThreadShouldBeInitializedBeforeClosingDrillbit*

*Expected result:*
The test should ignore the loopback address setting and pass.

*Actual result:*
The test fails:
{noformat}
16:06:51.921 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 64.2 
KiB(242.8 KiB), h: 71.1 MiB(336.2 MiB), nh: 91.9 KiB(164.3 MiB)): 
gracefulShutdownThreadShouldBeInitializedBeforeClosingDrillbit(org.apache.drill.test.TestGracefulShutdown)
java.lang.AssertionError: null
at 
org.apache.drill.test.TestGracefulShutdown.gracefulShutdownThreadShouldBeInitializedBeforeClosingDrillbit(TestGracefulShutdown.java:207)
 ~[test-classes/:na]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181]
{noformat}






[jira] [Created] (DRILL-6910) A filtering column remains in scan when filter pruning happens.

2018-12-18 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6910:
---

 Summary: A filtering column remains in scan when filter pruning 
happens.
 Key: DRILL-6910
 URL: https://issues.apache.org/jira/browse/DRILL-6910
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Data:*
{code:sql}
create table dfs.tmp.`nation` as select * from cp.`tpch/nation.parquet`;
{code}

*Query:*
{code:sql}
explain plan for select n_nationkey from dfs.tmp.`nation` where n_regionkey < 10
{code}

*Expected result:*
The filtering column (n_regionkey) should not be present in scan operator.

*Actual result:*
It remains in scan in spite of filter pruning.
{noformat}
00-00Screen : rowType = RecordType(ANY n_nationkey): rowcount = 25.0, 
cumulative cost = {52.5 rows, 77.5 cpu, 50.0 io, 0.0 network, 0.0 memory}, id = 
112988
00-01  Project(n_nationkey=[$1]) : rowType = RecordType(ANY n_nationkey): 
rowcount = 25.0, cumulative cost = {50.0 rows, 75.0 cpu, 50.0 io, 0.0 network, 
0.0 memory}, id = 112987
00-02Scan(table=[[dfs, tmp, nation]], groupscan=[ParquetGroupScan 
[entries=[ReadEntryWithPath [path=maprfs:///tmp/nation]], 
selectionRoot=maprfs:/tmp/nation, numFiles=1, numRowGroups=1, 
usedMetadataFile=false, columns=[`n_regionkey`, `n_nationkey`]]]) : rowType = 
RecordType(ANY n_regionkey, ANY n_nationkey): rowcount = 25.0, cumulative cost 
= {25.0 rows, 50.0 cpu, 50.0 io, 0.0 network, 0.0 memory}, id = 112986
{noformat}






[jira] [Created] (DRILL-6905) ClassCastException happens when combining filters with numeric and varchar literals

2018-12-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6905:
---

 Summary: ClassCastException happens when combining filters with 
numeric and varchar literals
 Key: DRILL-6905
 URL: https://issues.apache.org/jira/browse/DRILL-6905
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Query:*
{code:sql}
select * from cp.`tpch/nation.parquet` where n_nationkey < 5 or n_nationkey = 
'10'
{code}

*Expected result:*
The query should run successfully.

*Actual result:* 
ClassCastException happens
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
ClassCastException Please, refer to logs for more information. [Error Id: 
6651eab4-2efe-4275-8816-71e306396d51 on node1.cluster.com:31010] 
(org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
during fragment initialization: Error while applying rule 
ReduceExpressionsRule(Filter), args 
[rel#566080:LogicalFilter.NONE.ANY([]).[](input=rel#566079:Subset#0.ENUMERABLE.ANY([]).[],condition=OR(<($1,
 5), =($1, '10')))] org.apache.drill.exec.work.foreman.Foreman.run():300 
java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
java.lang.Thread.run():748 Caused By (java.lang.RuntimeException) Error while 
applying rule ReduceExpressionsRule(Filter), args 
[rel#566080:LogicalFilter.NONE.ANY([]).[](input=rel#566079:Subset#0.ENUMERABLE.ANY([]).[],condition=OR(<($1,
 5), =($1, '10')))] 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():236 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp():648 
org.apache.calcite.tools.Programs$RuleSetProgram.run():339 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform():431 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform():371 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel():251
 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():320
 org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():177 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():155 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():81 
org.apache.drill.exec.work.foreman.Foreman.runSQL():584 
org.apache.drill.exec.work.foreman.Foreman.run():272 
java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
java.lang.Thread.run():748 Caused By (java.lang.ClassCastException) null
{noformat}
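
*Possible workaround (untested):* keep both comparisons in the same type, e.g. 
use a numeric literal instead of the varchar one:
{code:sql}
select * from cp.`tpch/nation.parquet` where n_nationkey < 5 or n_nationkey = 10
{code}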






[jira] [Created] (DRILL-6891) Drill cannot cast required type to optional and vice versa that may cause failures of functions with more than one argument.

2018-12-10 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6891:
---

 Summary: Drill cannot cast required type to optional and vice 
versa that may cause failures of functions with more than one argument.
 Key: DRILL-6891
 URL: https://issues.apache.org/jira/browse/DRILL-6891
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Query:*

{code:sql}
with t (pkey, lnum) as (select l_partkey,
case when l_linenumber < 3 then null else l_linenumber end from 
cp.`tpch/lineitem.parquet`)
select covar_samp(pkey, lnum) from t limit 5
{code}

*Note:* The case statement is needed to transform the required data mode to optional.

*Expected result:*
The function should return a result.

*Actual result:*
Exception happens: Missing function implementation: [covar_samp(INT-REQUIRED, 
INT-OPTIONAL)]
{noformat}
SYSTEM ERROR: Drill Remote Exception


Please, refer to logs for more information.


  (org.apache.drill.exec.exception.SchemaChangeException) Failure while 
materializing expression. 
Error in expression at index -1.  Error: Missing function implementation: 
[covar_samp(INT-REQUIRED, INT-OPTIONAL)].  Full expression: --UNKNOWN 
EXPRESSION--.

org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.createAggregatorInternal():513

org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.createAggregator():434

org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.buildSchema():181
org.apache.drill.exec.record.AbstractRecordBatch.next():161
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.record.AbstractRecordBatch.next():126
org.apache.drill.exec.record.AbstractRecordBatch.next():116
org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63

org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143
org.apache.drill.exec.record.AbstractRecordBatch.next():186
org.apache.drill.exec.physical.impl.BaseRootExec.next():104
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
org.apache.drill.exec.physical.impl.BaseRootExec.next():94
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284
...():0
org.apache.hadoop.security.UserGroupInformation.doAs():1669
org.apache.drill.exec.work.fragment.FragmentExecutor.run():284
org.apache.drill.common.SelfCleaningRunnable.run():38
...():0
{noformat}
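
*Possible workaround (untested):* apply the same case trick to the required 
column so that both arguments become optional. The condition below is chosen so 
it never actually nulls a value (l_partkey is always positive in the TPC-H 
data):
{code:sql}
with t (pkey, lnum) as (select
case when l_partkey < 0 then null else l_partkey end,
case when l_linenumber < 3 then null else l_linenumber end
from cp.`tpch/lineitem.parquet`)
select covar_samp(pkey, lnum) from t limit 5
{code}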





[jira] [Created] (DRILL-6884) Add support for directory-based auto-partitioning

2018-12-06 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6884:
---

 Summary: Add support for directory-based auto-partitioning
 Key: DRILL-6884
 URL: https://issues.apache.org/jira/browse/DRILL-6884
 Project: Apache Drill
  Issue Type: New Feature
Reporter: Anton Gozhiy


Currently, during partitioning, Drill creates separate files, but not separate 
directories, for different partitions, and that works only for the parquet 
format. For other formats you have to create partitions manually as separate 
directories. The ability to do this automatically would be useful.
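
For reference, the existing file-based auto-partitioning (parquet only; the 
target table name here is illustrative) looks like:
{code:sql}
create table dfs.tmp.`orders_partitioned` 
partition by (o_orderstatus)
as select * from cp.`tpch/orders.parquet`
{code}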





[jira] [Created] (DRILL-6857) Limit is not being pushed into scan when selecting from a parquet file with multiple row groups.

2018-11-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6857:
---

 Summary: Limit is not being pushed into scan when selecting from a 
parquet file with multiple row groups.
 Key: DRILL-6857
 URL: https://issues.apache.org/jira/browse/DRILL-6857
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy
 Attachments: DRILL_5796_test_data.parquet

*Data:*
A parquet file that contains more than one row group. Example is attached.

*Query:*
{code:sql}
explain plan for select * from dfs.tmp.`DRILL_5796_test_data.parquet` limit 1
{code}

*Expected result:*
numFiles=1, numRowGroups=1

*Actual result:*
numFiles=1, numRowGroups=3
{noformat}
00-00Screen : rowType = RecordType(DYNAMIC_STAR **): rowcount = 1.0, 
cumulative cost = {274.1 rows, 280.1 cpu, 270.0 io, 0.0 network, 0.0 memory}, 
id = 13671
00-01  Project(**=[$0]) : rowType = RecordType(DYNAMIC_STAR **): rowcount = 
1.0, cumulative cost = {274.0 rows, 280.0 cpu, 270.0 io, 0.0 network, 0.0 
memory}, id = 13670
00-02SelectionVectorRemover : rowType = RecordType(DYNAMIC_STAR **): 
rowcount = 1.0, cumulative cost = {273.0 rows, 279.0 cpu, 270.0 io, 0.0 
network, 0.0 memory}, id = 13669
00-03  Limit(fetch=[1]) : rowType = RecordType(DYNAMIC_STAR **): 
rowcount = 1.0, cumulative cost = {272.0 rows, 278.0 cpu, 270.0 io, 0.0 
network, 0.0 memory}, id = 13668
00-04Limit(fetch=[1]) : rowType = RecordType(DYNAMIC_STAR **): 
rowcount = 1.0, cumulative cost = {271.0 rows, 274.0 cpu, 270.0 io, 0.0 
network, 0.0 memory}, id = 13667
00-05  Scan(table=[[dfs, tmp, DRILL_5796_test_data.parquet]], 
groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
[path=maprfs:///tmp/DRILL_5796_test_data.parquet]], 
selectionRoot=maprfs:/tmp/DRILL_5796_test_data.parquet, numFiles=1, 
numRowGroups=3, usedMetadataFile=false, columns=[`**`]]]) : rowType = 
RecordType(DYNAMIC_STAR **): rowcount = 270.0, cumulative cost = {270.0 rows, 
270.0 cpu, 270.0 io, 0.0 network, 0.0 memory}, id = 13666
{noformat}

*Note:*
The limit pushdown works with the same data partitioned by files (1 row group 
per file).





[jira] [Created] (DRILL-6856) Wrong result returned if the query filters a boolean column with both "is true" and "is null" conditions

2018-11-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6856:
---

 Summary: Wrong result returned if the query filters a boolean 
column with both "is true" and "is null" conditions
 Key: DRILL-6856
 URL: https://issues.apache.org/jira/browse/DRILL-6856
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy
 Attachments: 0_0_0.parquet

*Data:*
A parquet file with a boolean column that contains null values.
An example is attached.

*Query:*

{code:sql}
select bool_col from dfs.tmp.`Test_data` where bool_col is true or bool_col is 
null
{code}

*Result:*
{noformat}
null
null
{noformat}

*Plan:*
{noformat}
00-00Screen : rowType = RecordType(ANY bool_col): rowcount = 3.75, 
cumulative cost = {37.875 rows, 97.875 cpu, 15.0 io, 0.0 network, 0.0 memory}, 
id = 1980
00-01  Project(bool_col=[$0]) : rowType = RecordType(ANY bool_col): 
rowcount = 3.75, cumulative cost = {37.5 rows, 97.5 cpu, 15.0 io, 0.0 network, 
0.0 memory}, id = 1979
00-02SelectionVectorRemover : rowType = RecordType(ANY bool_col): 
rowcount = 3.75, cumulative cost = {33.75 rows, 93.75 cpu, 15.0 io, 0.0 
network, 0.0 memory}, id = 1978
00-03  Filter(condition=[IS NULL($0)]) : rowType = RecordType(ANY 
bool_col): rowcount = 3.75, cumulative cost = {30.0 rows, 90.0 cpu, 15.0 io, 
0.0 network, 0.0 memory}, id = 1977
00-04Scan(table=[[dfs, tmp, Test_data]], 
groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath 
[path=maprfs:///tmp/Test_data]], selectionRoot=maprfs:/tmp/Test_data, 
numFiles=1, numRowGroups=1, usedMetadataFile=false, columns=[`bool_col`]]]) : 
rowType = RecordType(ANY bool_col): rowcount = 15.0, cumulative cost = {15.0 
rows, 15.0 cpu, 15.0 io, 0.0 network, 0.0 memory}, id = 1976
{noformat}

*Notes:* 
- "true" values were not included in the result though they should have been.
- The result is correct if "bool_col = true" is used instead of "is true".
- In the plan you can see that the "is true" condition is absent from the 
Filter operator.
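
For example, the equality form that returns the correct result:
{code:sql}
select bool_col from dfs.tmp.`Test_data` where bool_col = true or bool_col is null
{code}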





[jira] [Created] (DRILL-6733) Unit tests from KafkaFilterPushdownTest are failing in some environments.

2018-09-07 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6733:
---

 Summary: Unit tests from KafkaFilterPushdownTest are failing in 
some environments.
 Key: DRILL-6733
 URL: https://issues.apache.org/jira/browse/DRILL-6733
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0, 1.15.0
Reporter: Anton Gozhiy


*Steps:*
 # Build the Drill project without skipping the unit tests:
{noformat}
mvn clean install
{noformat}
Alternatively, if the project was already built, run tests for Kafka:
{noformat}
mvn test -pl contrib/storage-kafka
{noformat}

*Expected results:*
All tests pass.

*Actual results:*
 Tests from KafkaFilterPushdownTest are failing:
{noformat}
--- 
T E S T S 
--- 
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: -1,283,514.348 
sec - in org.apache.drill.exec.store.kafka.MessageIteratorTest 
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: -1,283,513.783 
sec - in org.apache.drill.exec.store.kafka.KafkaQueriesTest 
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: -1,283,512.35 
sec - in org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec - in 
org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
Running org.apache.drill.exec.store.kafka.KafkaQueriesTest 
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 152.2 sec - in 
org.apache.drill.exec.store.kafka.KafkaQueriesTest 
Running org.apache.drill.exec.store.kafka.MessageIteratorTest 
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.036 sec - in 
org.apache.drill.exec.store.kafka.MessageIteratorTest 
Running org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.611 sec - in 
org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest 
Running org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest 
13:09:29.511 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 213 
B(139.3 KiB), h: 20.0 MiB(719.0 MiB), nh: 794.5 KiB(120.1 MiB)): 
testPushdownWithOr(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest) 
java.lang.AssertionError: expected:<26> but was:<0> 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownWithOr(KafkaFilterPushdownTest.java:259)
 ~[test-classes/:na] 
   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
13:09:33.307 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 377 
B(139.7 KiB), h: 18.5 MiB(743.2 MiB), nh: 699.5 KiB(120.9 MiB)): 
testPushdownWithAndOrCombo2(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
 
java.lang.AssertionError: expected:<4> but was:<0> 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownWithAndOrCombo2(KafkaFilterPushdownTest.java:316)
 ~[test-classes/:na] 
   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
13:09:44.424 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 0 
B(139.7 KiB), h: 11.7 MiB(774.6 MiB), nh: 537.1 KiB(122.3 MiB)): 
testPushdownOnTimestamp(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest)
 
java.lang.AssertionError: expected:<20> but was:<0> 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.runKafkaSQLVerifyCount(KafkaTestBase.java:69)
 ~[test-classes/:na] 
   at 
org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest.testPushdownOnTimestamp(KafkaFilterPushdownTest.java:92)
 ~[test-classes/:na] 
   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181] 
13:09:48.162 [main] ERROR org.apache.drill.TestReporter - Test Failed (d: 0 
B(139.7 KiB), h: 13.3 MiB(787.9 MiB), nh: 379.9 KiB(122.7 MiB)): 
testPushdownOnOffset(org.apache.drill.exec.store.kafka.KafkaFilterPushdownTest) 
java.lang.AssertionError: expected:<5> but was:<0> 
   at 
org.apache.drill.exec.store.kafka.KafkaTestBase.logResultAndVerifyRowCount(KafkaTestBase.java:76)
 ~[test-classes/:na] 
   at 

[jira] [Created] (DRILL-6693) When a query is started from Drill Web Console, the UI becomes inaccessible until the query is completed

2018-08-16 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6693:
---

 Summary: When a query is started from Drill Web Console, the UI 
becomes inaccessible until the query is completed
 Key: DRILL-6693
 URL: https://issues.apache.org/jira/browse/DRILL-6693
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


*Steps:*
# From Web UI, run the following query:
{noformat}
select * 
from (
select employee_id, full_name, first_name, last_name, position_id, 
position_title, store_id, department_id, birth_date, hire_date, salary, 
supervisor_id, education_level, marital_status, gender, management_role 
from cp.`employee.json` 
union
select employee_id, full_name, first_name, last_name, position_id, 
position_title, store_id, department_id, birth_date, hire_date, salary, 
supervisor_id, education_level, marital_status, gender, management_role 
from cp.`employee.json` 
union
select employee_id, full_name, first_name, last_name, position_id, 
position_title, store_id, department_id, birth_date, hire_date, salary, 
supervisor_id, education_level, marital_status, gender, management_role
from cp.`employee.json`)
where last_name = 'Blumberg'
{noformat}
# While the query is running, try to open the Profiles page (or any other). If it 
completes too fast, add some unions to the query above.

*Expected result:*
Profiles page should be opened. The running query should be listed.

*Actual result:*
The Web UI hangs until the query completes.

*Note:*
If the query is started from sqlline, everything is fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6686) Exception happens when trying to filter by id from a MaprDB json table

2018-08-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6686:
---

 Summary: Exception happens when trying to filter by id from a 
MaprDB json table
 Key: DRILL-6686
 URL: https://issues.apache.org/jira/browse/DRILL-6686
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy
 Attachments: lineitem.json

*Prerequisites:*
- Put the attached json file to dfs:
{noformat}
hadoop fs -put -f ./lineitem.json /tmp/
{noformat}
- Import it to MapRDB:
{noformat}
mapr importJSON -idField "l_orderkey" -src /tmp/lineitem.json -dst /tmp/lineitem
{noformat}
- Create Hive External table:
{noformat}
CREATE EXTERNAL TABLE lineitem ( 
l_orderkey string, 
l_comment string, 
l_commitdate string,
l_discount string,
l_extendedprice string,
l_linenumber string,
l_linestatus string,
l_partkey string,
l_quantity string,
l_receiptdate string,
l_returnflag string,
l_shipdate string,
l_shipinstruct string,
l_shipmode string,
l_suppkey string,
l_tax int
) 
STORED BY 'org.apache.hadoop.hive.maprdb.json.MapRDBJsonStorageHandler' 
TBLPROPERTIES("maprdb.table.name" = "/tmp/lineitem","maprdb.column.id" = 
"l_orderkey");
{noformat}
- In Drill:
{noformat}
set store.hive.maprdb_json.optimize_scan_with_native_reader = true;
{noformat}

*Query:*
{code:sql}
select * from hive.`lineitem` where l_orderkey < 100
{code}

*Expected results:*
The query should return results

*Actual result:*
Exception happens:
{noformat}
SYSTEM ERROR: IllegalArgumentException: A INT value can not be used for '_id' 
field.



  (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
during fragment initialization: Error while applying rule 
MapRDBPushFilterIntoScan:Filter_On_Scan, args 
[rel#1751:FilterPrel.PHYSICAL.SINGLETON([]).[](input=rel#1746:Subset#3.PHYSICAL.SINGLETON([]).[],condition=<($0,
 100)), 
rel#1745:ScanPrel.PHYSICAL.SINGLETON([]).[](groupscan=JsonTableGroupScan 
[ScanSpec=JsonScanSpec [tableName=/tmp/lineitem, condition=null], 
columns=[`_id`, `l_comment`, `l_commitdate`, `l_discount`, `l_extendedprice`, 
`l_linenumber`, `l_linestatus`, `l_partkey`, `l_quantity`, `l_receiptdate`, 
`l_returnflag`, `l_shipdate`, `l_shipinstruct`, `l_shipmode`, `l_suppkey`, 
`l_tax`, `**`]])]
org.apache.drill.exec.work.foreman.Foreman.run():294
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748
  Caused By (java.lang.RuntimeException) Error while applying rule 
MapRDBPushFilterIntoScan:Filter_On_Scan, args 
[rel#1751:FilterPrel.PHYSICAL.SINGLETON([]).[](input=rel#1746:Subset#3.PHYSICAL.SINGLETON([]).[],condition=<($0,
 100)), 
rel#1745:ScanPrel.PHYSICAL.SINGLETON([]).[](groupscan=JsonTableGroupScan 
[ScanSpec=JsonScanSpec [tableName=/tmp/lineitem, condition=null], 
columns=[`_id`, `l_comment`, `l_commitdate`, `l_discount`, `l_extendedprice`, 
`l_linenumber`, `l_linestatus`, `l_partkey`, `l_quantity`, `l_receiptdate`, 
`l_returnflag`, `l_shipdate`, `l_shipinstruct`, `l_shipmode`, `l_suppkey`, 
`l_tax`, `**`]])]
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():236
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp():652
org.apache.calcite.tools.Programs$RuleSetProgram.run():368
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform():430

org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel():460
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():182
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():145
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():83
org.apache.drill.exec.work.foreman.Foreman.runSQL():567
org.apache.drill.exec.work.foreman.Foreman.run():266
java.util.concurrent.ThreadPoolExecutor.runWorker():1149
java.util.concurrent.ThreadPoolExecutor$Worker.run():624
java.lang.Thread.run():748
  Caused By (java.lang.IllegalArgumentException) A INT value can not be used 
for '_id' field.
com.mapr.db.impl.ConditionLeaf.checkArgs():308
com.mapr.db.impl.ConditionLeaf.():100
com.mapr.db.impl.ConditionLeaf.():86
com.mapr.db.impl.ConditionLeaf.():82
com.mapr.db.impl.ConditionImpl.is():407
com.mapr.db.impl.ConditionImpl.is():402
com.mapr.db.impl.ConditionImpl.is():43

org.apache.drill.exec.store.mapr.db.json.JsonConditionBuilder.setIsCondition():127

org.apache.drill.exec.store.mapr.db.json.JsonConditionBuilder.createJsonScanSpec():181

org.apache.drill.exec.store.mapr.db.json.JsonConditionBuilder.visitFunctionCall():80

org.apache.drill.exec.store.mapr.db.json.JsonConditionBuilder.visitFunctionCall():33
org.apache.drill.common.expression.FunctionCall.accept():60
org.apache.drill.exec.store.mapr.db.json.JsonConditionBuilder.parseTree():48


[jira] [Created] (DRILL-6681) Add support for SHOW VIEWS statement;

2018-08-10 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6681:
---

 Summary: Add support for SHOW VIEWS statement;
 Key: DRILL-6681
 URL: https://issues.apache.org/jira/browse/DRILL-6681
 Project: Apache Drill
  Issue Type: New Feature
Reporter: Anton Gozhiy


Add ability to list views similar to "show tables".
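A hedged sketch of what the requested statement could look like, modeled on the existing SHOW TABLES syntax (the exact grammar is hypothetical and would be decided during implementation):

{code:sql}
-- List views in the current schema (proposed syntax):
SHOW VIEWS;
-- Optionally scoped to a schema, mirroring SHOW TABLES:
SHOW VIEWS IN dfs.tmp;
{code}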



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6679) Error should be displayed when trying to connect Drill to unsupported version of Hive

2018-08-10 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6679:
---

 Summary: Error should be displayed when trying to connect Drill to 
unsupported version of Hive
 Key: DRILL-6679
 URL: https://issues.apache.org/jira/browse/DRILL-6679
 Project: Apache Drill
  Issue Type: Improvement
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


For example, there is no backward compatibility between Hive 2.3 and Hive 2.1. 
But it is possible to connect Drill with a Hive 2.3 client to Hive 2.1; it just 
won't work correctly. So I suggest that enabling the Hive storage plugin should not 
be allowed if the Hive version is unsupported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6658) UnrecognizedPropertyException happens when submitting a physical plan for a Mapr-DB table query

2018-08-01 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6658:
---

 Summary: UnrecognizedPropertyException happens when submitting a 
physical plan for a Mapr-DB table query
 Key: DRILL-6658
 URL: https://issues.apache.org/jira/browse/DRILL-6658
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0, 1.14.0
Reporter: Anton Gozhiy


*Prerequisites:*
Create a MapR-DB table:
{noformat}
hadoop fs -mkdir /tmp/mdb_tabl
mapr dbshell
create /tmp/mdb_table/json
insert /tmp/mdb_table/json --value '{"_id":"movie002" , "title":"Developers 
on the Edge", "studio":"Command Line Studios"}'
insert /tmp/mdb_table/json --id movie003 --value '{"title":"The Golden 
Master", "studio":"All-Nighter"}'
{noformat}
 
 *Steps:*
*1.* Execute the following query:

{code:sql}
explain plan for select * from dfs.tmp.`mdb_table`;
{code}
*2.* Copy the json plan from the response
*3.* In the Drill Web UI, go to the Query page
*4.* Set the checkbox to PHYSICAL, then execute the plan you copied

*Expected result:*
Result should be the same as for the query:
{code:sql}
select * from dfs.tmp.`mdb_table`;
{code}

*Actual result:*
Exception happens:
{code}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
UnrecognizedPropertyException: Unrecognized field "startRow" (class 
org.apache.drill.exec.store.mapr.db.json.JsonScanSpec), not marked as ignorable 
(2 known properties: "tableName", "condition"]) at [Source: (String)"{ "head" : 
{ "version" : 1, "generator" : { "type" : "ExplainHandler", "info" : "" }, 
"type" : "APACHE_DRILL_PHYSICAL", "options" : [ ], "queue" : 0, 
"hasResourcePlan" : false, "resultMode" : "EXEC" }, "graph" : [ { "pop" : 
"maprdb-json-scan", "@id" : 2, "userName" : "mapr", "scanSpec" : { "tableName" 
: "maprfs:///tmp/mdb_table/json", "startRow" : "", "stopRow" : "", 
"serializedFilter" : null }, "storage" : { "type" : "file", "connection" : 
"maprfs:///", "config" : null, "workspaces" : { "root" "[truncated 1652 chars]; 
line: 1, column: 398] (through reference chain: 
org.apache.drill.exec.physical.PhysicalPlan["graph"]->java.util.ArrayList[0]->org.apache.drill.exec.store.mapr.db.json.JsonTableGroupScan["scanSpec"]->org.apache.drill.exec.store.mapr.db.json.JsonScanSpec["startRow"])
 [Error Id: 2c0542ab-295a-4b14-abba-d9ee08a9129a on node1.cluster.com:31010]
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6639) Exception happens while displaying operator profiles if querying a Hive Mapr-DB with the native reader

2018-07-26 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6639:
---

 Summary: Exception happens while displaying operator profiles if 
querying a Hive Mapr-DB with the native reader
 Key: DRILL-6639
 URL: https://issues.apache.org/jira/browse/DRILL-6639
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Prerequisites:*
*1.* Create a MapR-DB JSON table:
{noformat}
hadoop fs -mkdir /tmp/mdb_tabl
mapr dbshell
create /tmp/mdb_table/json
insert /tmp/mdb_table/json --value '{"_id":"movie002" , "title":"Developers 
on the Edge", "studio":"Command Line Studios"}'
insert /tmp/mdb_table/json --id movie003 --value '{"title":"The Golden 
Master", "studio":"All-Nighter"}'
{noformat}
*2.* Create a Hive external table:
{noformat}
CREATE EXTERNAL TABLE mapr_db_json_hive_tbl ( 
movie_id string, title string, studio string) 
STORED BY 'org.apache.hadoop.hive.maprdb.json.MapRDBJsonStorageHandler' 
TBLPROPERTIES("maprdb.table.name" = "/tmp/mdb_table/json","maprdb.column.id" = 
"movie_id");
{noformat}
*3.* Enable Hive storage plugin in Drill:

{code:json}
{
  "type": "hive",
  "enabled": true,
  "configProps": {
"hive.metastore.uris": "thrift://localhost:9083",
"fs.default.name": "maprfs:///",
"hive.metastore.sasl.enabled": "false"
  }
}
{code}


*Steps:*
*1.* Run the following query:
{noformat}
select * from hive.`mapr_db_json_hive_tbl`
{noformat}
*2.* Open the query profile in the Drill UI, look at the Operator Profiles

*Expected result:*
Operator Profiles should be displayed

*Actual result:*
Exception displayed:
{noformat}
FreeMarker template error (DEBUG mode; use RETHROW in production!): Java method 
"org.apache.drill.exec.server.rest.profile.ProfileWrapper.getOperatorsOverview()"
 threw an exception when invoked on 
org.apache.drill.exec.server.rest.profile.ProfileWrapper object 
"org.apache.drill.exec.server.rest.profile.ProfileWrapper@36c94e5"; see cause 
exception in the Java stack trace.  FTL stack trace ("~" means 
nesting-related): - Failed at: ${model.getOperatorsOverview()?no_esc} [in 
template "rest/profile/profile.ftl" in macro "page_body" at line 338, column 
11] - Reached through: @page_body [in template "rest/generic.ftl" in macro 
"page_html" at line 99, column 9] - Reached through: @page_html [in template 
"rest/profile/profile.ftl" at line 474, column 1]  Java stack trace (for 
programmers):  freemarker.core._TemplateModelException: [... Exception 
message was already printed; see it above ...] at 
freemarker.ext.beans._MethodUtil.newInvocationTemplateModelException(_MethodUtil.java:289)
 at 
freemarker.ext.beans._MethodUtil.newInvocationTemplateModelException(_MethodUtil.java:252)
 at freemarker.ext.beans.SimpleMethodModel.exec(SimpleMethodModel.java:74) at 
freemarker.core.MethodCall._eval(MethodCall.java:65) at 
freemarker.core.Expression.eval(Expression.java:81) at 
freemarker.core.BuiltInsForOutputFormatRelated$AbstractConverterBI.calculateResult(BuiltInsForOutputFormatRelated.java:50)
 at 
freemarker.core.MarkupOutputFormatBoundBuiltIn._eval(MarkupOutputFormatBoundBuiltIn.java:40)
 at freemarker.core.Expression.eval(Expression.java:81) at 
freemarker.core.DollarVariable.calculateInterpolatedStringOrMarkup(DollarVariable.java:96)
 at freemarker.core.DollarVariable.accept(DollarVariable.java:59) at 
freemarker.core.Environment.visit(Environment.java:362) at 
freemarker.core.Environment.invoke(Environment.java:714) at 
freemarker.core.UnifiedCall.accept(UnifiedCall.java:83) at 
freemarker.core.Environment.visit(Environment.java:362) at 
freemarker.core.Environment.invoke(Environment.java:714) at 
freemarker.core.UnifiedCall.accept(UnifiedCall.java:83) at 
freemarker.core.Environment.visit(Environment.java:326) at 
freemarker.core.Environment.visit(Environment.java:332) at 
freemarker.core.Environment.process(Environment.java:305) at 
freemarker.template.Template.process(Template.java:378) at 
org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:143)
 at 
org.glassfish.jersey.server.mvc.freemarker.FreemarkerViewProcessor.writeTo(FreemarkerViewProcessor.java:85)
 at 
org.glassfish.jersey.server.mvc.spi.ResolvedViewable.writeTo(ResolvedViewable.java:116)
 at 
org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:134)
 at 
org.glassfish.jersey.server.mvc.internal.ViewableMessageBodyWriter.writeTo(ViewableMessageBodyWriter.java:88)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:263)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
 at 
org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
 at 

[jira] [Created] (DRILL-6630) Extra spaces are ignored while publishing results in Drill Web UI

2018-07-24 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6630:
---

 Summary: Extra spaces are ignored while publishing results in 
Drill Web UI
 Key: DRILL-6630
 URL: https://issues.apache.org/jira/browse/DRILL-6630
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Prerequisites:*
Use Drill Web UI to submit queries

*Query:*
{code:sql}
select '   sdssada' from (values(1))
{code}

*Expected Result:*
{noformat}
"  sdssada"
{noformat}

*Actual Result:*
{noformat}
"sds sada"
{noformat}

*Note:* Inspecting the element using Chrome Developer Tools, you can see that it 
contains the real string. So something should be done with the HTML formatting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6544) Timestamp value in Drill UI showed inconsistently with the same value retrieved from sqlline

2018-06-27 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6544:
---

 Summary: Timestamp value in Drill UI showed inconsistently with 
the same value retrieved from sqlline
 Key: DRILL-6544
 URL: https://issues.apache.org/jira/browse/DRILL-6544
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Query:*
{code:sql}
select timestamp '2008-2-23 12:23:34' from (values(1));
{code}

*Expected result (from sqlline):*
2008-02-23 12:23:34.0

*Actual result (from Drill UI):*
2008-02-23T12:23:34



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6417) Project is not pushed into scan if use subquery with UNION operator

2018-05-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6417:
---

 Summary: Project is not pushed into scan if use subquery with 
UNION operator
 Key: DRILL-6417
 URL: https://issues.apache.org/jira/browse/DRILL-6417
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy


*Data:*
Use attached dataset

*Query:*
{code:sql}
explain plan for select id
from 
  (select id, part_col, int_col, bool_col, date_col, float_col, time_col, ts_col
   from dfs.tmp.`DRILL_3855_test_data`
   where part_col = 'Partition_one' or part_col = 'Partition_two'
   union
   select id, part_col, int_col, bool_col, date_col, float_col, time_col, ts_col
   from dfs.tmp.`DRILL_3855_test_data`
   where part_col = 'Partition_two' or part_col = 'Partition_three')
where int_col = 0
{code}

*Expected plan:*
{noformat}
Scan ... columns=[`part_col`, `id`, `int_col`]
{noformat}

*Actual plan:*
{noformat}
Scan ... columns=[`part_col`, `id`, `int_col`, `bool_col`, `date_col`, 
`float_col`, `time_col`, `ts_col`]
{noformat}

*Notes:*
Works as expected if "union" is changed to "union all"
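For reference, the variant that does push the projection into the scan differs only in the set operator (column list shortened here for brevity):

{code:sql}
explain plan for select id
from 
  (select id, part_col, int_col
   from dfs.tmp.`DRILL_3855_test_data`
   where part_col = 'Partition_one' or part_col = 'Partition_two'
   union all
   select id, part_col, int_col
   from dfs.tmp.`DRILL_3855_test_data`
   where part_col = 'Partition_two' or part_col = 'Partition_three')
where int_col = 0
{code}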



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6413) Specific query returns an exception if filter a boolean column by "equals" operator

2018-05-14 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6413:
---

 Summary: Specific query returns an exception if filter a boolean 
column by "equals" operator
 Key: DRILL-6413
 URL: https://issues.apache.org/jira/browse/DRILL-6413
 Project: Apache Drill
  Issue Type: Bug
Reporter: Anton Gozhiy
 Attachments: Test_data.tar.gz

*Data:*
Use the attached dataset

*Query:*
select *
from dfs.tmp.`Test_data`
where bool_col = true and part_col in ('Partition_two')

*Expected result:*
The query should return results normally

*Actual result:*
Exception happens:
{noformat}
Error: SYSTEM ERROR: ClassCastException: 
org.apache.drill.common.expression.TypedFieldExpr cannot be cast to 
org.apache.drill.exec.expr.stat.ParquetFilterPredicate
{noformat}

*Notes:*
It works OK if the "is" operator is used, or if "*" is not used in the select statement
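The workaround mentioned in the note, spelled out against the same dataset (only the boolean predicate changes):

{code:sql}
-- Use IS TRUE instead of "= true" to avoid the ClassCastException
select *
from dfs.tmp.`Test_data`
where bool_col is true and part_col in ('Partition_two')
{code}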



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6342) Parquet filter pushdown doesn't work in case of filtering fields inside arrays of complex fields

2018-04-19 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6342:
---

 Summary: Parquet filter pushdown doesn't work in case of filtering 
fields inside arrays of complex fields
 Key: DRILL-6342
 URL: https://issues.apache.org/jira/browse/DRILL-6342
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.14.0
Reporter: Anton Gozhiy
 Attachments: Complex_data.tar.gz

*Data:*
 Complex_data data set is attached

*Query:*
{code:sql}
explain plan for select * from dfs.tmp.`Complex_data` t where 
t.list_of_complex_fields[2].nested_field is true
{code}

*Expected result:*
numFiles=2
Statistics of the file that shouldn't be scanned:
{noformat}
list_of_complex_fields:
.nested_field:   BOOLEAN UNCOMPRESSED DO:0 FPO:497 
SZ:41/41/1.00 VC:3 ENC:PLAIN,RLE ST:[min: false, max: false, num_nulls: 0]
{noformat}

*Actual result:*
numFiles=3
I.e., filter pushdown does not work





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6250) Sqlline start command with password appears in the sqlline.log

2018-03-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6250:
---

 Summary: Sqlline start command with password appears in the 
sqlline.log
 Key: DRILL-6250
 URL: https://issues.apache.org/jira/browse/DRILL-6250
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


*Prerequisites:*
 *1.* Log level is set to "all" in the conf/logback.xml:
{code:xml}




{code}
*2.* PLAIN authentication mechanism is configured:
{code:java}
  security.user.auth: {
enabled: true,
packages += "org.apache.drill.exec.rpc.user.security",
impl: "pam",
pam_profiles: [ "sudo", "login" ]
  }
{code}
*Steps:*
 *1.* Start the drillbits
 *2.* Connect by sqlline:
{noformat}
/opt/mapr/drill/drill-1.13.0/bin/sqlline -u "jdbc:drill:zk=node1:5181;" -n 
user1 -p 1234
{noformat}
*3.* Check the sqlline logs:
{noformat}
tail -F log/sqlline.log|grep 1234 -a5 -b5
{noformat}
*Expected result:* Logs shouldn't contain clear-text passwords

*Actual result:* The logs contain the sqlline start command with password:
{noformat}
# system properties
35333-"java" : {
35352-# system properties
35384:"command" : "sqlline.SqlLine -d org.apache.drill.jdbc.Driver 
--maxWidth=1 --color=true -u jdbc:drill:zk=node1:5181; -n user1 -p 1234",
35535-# system properties
35567-"launcher" : "SUN_STANDARD"
35607-}
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6219) Filter pushdown doesn't work with star operator if there is a subquery with its own filter

2018-03-07 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6219:
---

 Summary: Filter pushdown doesn't work with star operator if there 
is a subquery with its own filter
 Key: DRILL-6219
 URL: https://issues.apache.org/jira/browse/DRILL-6219
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


*Data set:*
The data is generated using the attached file: *DRILL_6118_data_source.csv*
Data gen commands:

{code:sql}
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d1` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0] in (1, 3);
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d2` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0]=2;
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d3` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0]>3;
{code}

*Steps:*
# Execute the following query:

{code:sql}
select * from (select * from 
dfs.drillTestDir.`DRILL_6118_parquet_partitioned_by_folders` where c1>2) where 
c1>3{code}

*Expected result:*
Filters "c1>3" and "c1>2" should both be pushed down so only the data from the 
folder "d3" should be scanned.

*Actual result:* 
The data from the folders "d1" and "d3" is being scanned, as only the filter 
"c1>2" is pushed down

*Physical plan:*
{code}
00-00Screen : rowType = RecordType(DYNAMIC_STAR **): rowcount = 10.0, 
cumulative cost = {201.0 rows, 581.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id 
= 105545
00-01  Project(**=[$0]) : rowType = RecordType(DYNAMIC_STAR **): rowcount = 
10.0, cumulative cost = {200.0 rows, 580.0 cpu, 0.0 io, 0.0 network, 0.0 
memory}, id = 105544
00-02SelectionVectorRemover : rowType = RecordType(DYNAMIC_STAR 
T25¦¦**): rowcount = 10.0, cumulative cost = {190.0 rows, 570.0 cpu, 0.0 io, 
0.0 network, 0.0 memory}, id = 105543
00-03  Filter(condition=[>(ITEM($0, 'c1'), 3)]) : rowType = 
RecordType(DYNAMIC_STAR T25¦¦**): rowcount = 10.0, cumulative cost = {180.0 
rows, 560.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 105542
00-04Project(T25¦¦**=[$0]) : rowType = RecordType(DYNAMIC_STAR 
T25¦¦**): rowcount = 20.0, cumulative cost = {160.0 rows, 440.0 cpu, 0.0 io, 
0.0 network, 0.0 memory}, id = 105541
00-05  SelectionVectorRemover : rowType = RecordType(DYNAMIC_STAR 
T25¦¦**, ANY c1): rowcount = 20.0, cumulative cost = {140.0 rows, 420.0 cpu, 
0.0 io, 0.0 network, 0.0 memory}, id = 105540
00-06Filter(condition=[>($1, 2)]) : rowType = 
RecordType(DYNAMIC_STAR T25¦¦**, ANY c1): rowcount = 20.0, cumulative cost = 
{120.0 rows, 400.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 105539
00-07  Project(T25¦¦**=[$0], c1=[$1]) : rowType = 
RecordType(DYNAMIC_STAR T25¦¦**, ANY c1): rowcount = 40.0, cumulative cost = 
{80.0 rows, 160.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 105538
00-08Scan(groupscan=[ParquetGroupScan 
[entries=[ReadEntryWithPath 
[path=/drill/testdata/DRILL_6118_parquet_partitioned_by_folders/d1/0_0_0.parquet],
 ReadEntryWithPath 
[path=/drill/testdata/DRILL_6118_parquet_partitioned_by_folders/d3/0_0_0.parquet]],
 
selectionRoot=maprfs:/drill/testdata/DRILL_6118_parquet_partitioned_by_folders, 
numFiles=2, numRowGroups=2, usedMetadataFile=false, columns=[`**`]]]) : rowType 
= RecordType(DYNAMIC_STAR **, ANY c1): rowcount = 40.0, cumulative cost = {40.0 
rows, 80.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 105537
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6199) Filter push down doesn't work with more than one nested subquery

2018-03-01 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6199:
---

 Summary: Filter push down doesn't work with more than one nested 
subquery
 Key: DRILL-6199
 URL: https://issues.apache.org/jira/browse/DRILL-6199
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy
 Attachments: DRILL_6118_data_source.csv

*Data set:*
The data is generated using the attached file: *DRILL_6118_data_source.csv*
Data gen commands:

{code:sql}
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d1` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0] in (1, 3);
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d2` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0]=2;
create table dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders/d3` (c1, c2, 
c3, c4, c5) as select cast(columns[0] as int) c1, columns[1] c2, columns[2] c3, 
columns[3] c4, columns[4] c5 from dfs.tmp.`DRILL_6118_data_source.csv` where 
columns[0]>3;
{code}

*Steps:*
# Execute the following query:
{code:sql}
explain plan for select * from (select * from (select * from 
dfs.tmp.`DRILL_6118_parquet_partitioned_by_folders`)) where c1<3
{code}

*Expected result:*
numFiles=2, numRowGroups=2, only files from the folders d1 and d2 should be 
scanned.

*Actual result:*
Filter push down doesn't work:
numFiles=3, numRowGroups=3, scanning from all files



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6185) Error is displayed while accessing query profiles via the Web-UI

2018-02-26 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6185:
---

 Summary: Error is displayed while accessing query profiles via 
the Web-UI
 Key: DRILL-6185
 URL: https://issues.apache.org/jira/browse/DRILL-6185
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


*Steps:*
 # Execute the following query:

{code:sql}
show schemas;
{code}

# On the Web-UI, go to the Profiles tab
# Open the profile for the query you executed

*Expected result:* You can access to the profile entry

*Actual result:* Error is displayed:

{code:json}
{
  "errorMessage" : "1"
}
{code}

*Note:* This error doesn't happen with every query. For example, "select * from 
system.version" can be accessed without error, while "show tables", "use dfs", 
"alter session" etc. end with this error.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6127) NullPointerException happens when submitting physical plan to the Hive storage plugin

2018-01-31 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6127:
---

 Summary: NullPointerException happens when submitting physical 
plan to the Hive storage plugin
 Key: DRILL-6127
 URL: https://issues.apache.org/jira/browse/DRILL-6127
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


*Prerequisites:*
*1.* Create some test table in Hive:
{code:sql}
create external table if not exists hive_storage.test (key string, value 
string) stored as parquet
location '/hive_storage/test';
insert into table test values ("key", "value");
{code}
*2.* Hive plugin config:

{code:json}
{
  "type": "hive",
  "enabled": true,
  "configProps": {
"hive.metastore.uris": "thrift://localhost:9083",
"fs.default.name": "maprfs:///",
"hive.metastore.sasl.enabled": "false"
  }
}
{code}

*Steps:*
*1.* From the Drill web UI, run the following query:
{code:sql}
explain plan for select * from hive.hive_storage.`test`
{code}

*2.* Copy the json part of the plan
*3.* On the Query page set checkbox to the PHYSICAL
*4.* Submit the copied plan  

*Expected result:*
Drill should return normal result: "key", "value"

*Actual result:*
NPE happens:
{noformat}
[Error Id: 8b45c27e-bddd-4552-b7ea-e5af6f40866a on node1:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
NullPointerException


[Error Id: 8b45c27e-bddd-4552-b7ea-e5af6f40866a on node1:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:633)
 ~[drill-common-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:761)
 [drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.QueryStateProcessor.checkCommonStates(QueryStateProcessor.java:327)
 [drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.QueryStateProcessor.planning(QueryStateProcessor.java:223)
 [drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.QueryStateProcessor.moveToState(QueryStateProcessor.java:83)
 [drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:279) 
[drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[na:1.8.0_161]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[na:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]
Caused by: org.apache.drill.exec.work.foreman.ForemanSetupException: Failure 
while parsing physical plan.
at 
org.apache.drill.exec.work.foreman.Foreman.parseAndRunPhysicalPlan(Foreman.java:393)
 [drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:257) 
[drill-java-exec-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
... 3 common frames omitted
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Instantiation 
of [simple type, class org.apache.drill.exec.store.hive.HiveScan] value failed 
(java.lang.NullPointerException): null
 at [Source: { "head" : { "version" : 1, "generator" : { "type" : 
"ExplainHandler", "info" : "" }, "type" : "APACHE_DRILL_PHYSICAL", "options" : 
[ ], "queue" : 0, "hasResourcePlan" : false, "resultMode" : "EXEC" }, "graph" : 
[ { "pop" : "hive-scan", "@id" : 2, "userName" : "mapr", "hive-table" : { 
"table" : { "tableName" : "test", "dbName" : "hive_storage", "owner" : "mapr", 
"createTime" : 1517417959, "lastAccessTime" : 0, "retention" : 0, "sd" : { 
"location" : "maprfs:/hive_storage/test", "inputFormat" : 
"org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat", "outputFormat" 
: "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat", 
"compressed" : false, "numBuckets" : -1, "serDeInfo" : { "name" : null, 
"serializationLib" : 
"org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe", "parameters" : { 
"serialization.format" : "1" } }, "sortCols" : [ ], "parameters" : { } }, 
"partitionKeys" : [ ], "parameters" : { "totalSize" : "0", "EXTERNAL" : "TRUE", 
"numRows" : "1", "rawDataSize" : "2", "COLUMN_STATS_ACCURATE" : "true", 
"numFiles" : "0", "transient_lastDdlTime" : "1517418363" }, "viewOriginalText" 
: null, "viewExpandedText" : null, "tableType" : "EXTERNAL_TABLE", 
"columnsCache" : { "keys" : [ [ { "name" : "key", "type" : "string", "comment" 
: null }, { "name" : "value", "type" : "string", "comment" : null } ] ] } }, 
"partitions" : null }, "columns" : [ "`key`", "`value`" ], "cost" : 0.0 }, { 
"pop" : "project", "@id" : 1, "exprs" : [ { "ref" : "`key`", "expr" : "`key`" 
}, { "ref" : "`value`", "expr" : "`value`" } ], "child" : 2, "outputProj" : 
true, "initialAllocation" : 100, "maxAllocation" : 100, 

[jira] [Created] (DRILL-6119) The OpenTSDB storage plugin is not included in the Drill distribution

2018-01-30 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6119:
---

 Summary: The OpenTSDB storage plugin is not included in the Drill distribution
 Key: DRILL-6119
 URL: https://issues.apache.org/jira/browse/DRILL-6119
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


Steps:
 # Open the drillbit web UI ( [http://localhost:8047/] )
 # Navigate to the storage tab
 # Try to add new storage plugin with the following config:

{noformat}
{
  "type": "openTSDB",
  "connection": "http://localhost:4242",
  "enabled": true
}
{noformat}

Expected result:
The plugin should be added and enabled successfully

Actual result:
Error displayed: "Please retry: error (invalid JSON mapping)". 
In the drillbit.log: 
{noformat}
com.fasterxml.jackson.databind.JsonMappingException: Could not resolve type id 
'openTSDB' into a subtype of [simple type, class 
org.apache.drill.common.logical.StoragePluginConfig]: known type ids = 
[InfoSchemaConfig, StoragePluginConfig, SystemTablePluginConfig, file, hbase, 
hive, jdbc, kafka, kudu, mock, mongo, named]
{noformat}
The jar file corresponding to the plugin is absent from the distribution's jars
folder.
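The mapping error is consistent with the missing jar: Jackson resolves the `type` field against the subtypes registered on the classpath, and `openTSDB` is simply not among the known ids. A rough stand-alone sketch of that lookup logic, using the ids from the error message; this is illustrative, not Drill's actual code:

```python
# Storage plugin config type ids, as listed in the error message above.
KNOWN_TYPE_IDS = {
    "InfoSchemaConfig", "StoragePluginConfig", "SystemTablePluginConfig",
    "file", "hbase", "hive", "jdbc", "kafka", "kudu", "mock", "mongo", "named",
}

def resolve_type_id(type_id: str) -> str:
    """Mimics Jackson's subtype resolution: fail if the id is unregistered."""
    if type_id not in KNOWN_TYPE_IDS:
        raise ValueError(
            f"Could not resolve type id '{type_id}': known type ids = "
            f"{sorted(KNOWN_TYPE_IDS)}"
        )
    return type_id

resolve_type_id("hive")        # succeeds: the hive jar is on the classpath
# resolve_type_id("openTSDB")  # raises: the openTSDB jar is missing
```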



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-6081) Duration of completed queries is continuously increasing.

2018-01-11 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-6081:
---

 Summary: Duration of completed queries is continuously increasing.
 Key: DRILL-6081
 URL: https://issues.apache.org/jira/browse/DRILL-6081
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: Anton Gozhiy


Steps:
1. Execute some query (for example "select * from sys.version")
2. Go to the profiles page: http://node1:8047/profiles
3. Open the details page for the query
4. Expand the duration section

Expected result:
The duration should be accurate and should not change after a page reload

Actual result:
The duration is continuously increasing (reload the page to notice it):
||Planning||Queued||Execution||Total||
|0.092 sec|0.012 sec|59 min 36.487 sec  |59 min 36.591 sec|

Workaround: none

Note:
The issue was introduced by the following fix:
https://issues.apache.org/jira/browse/DRILL-5963
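The ever-growing numbers look like a duration computed from the current wall clock instead of the query's recorded end time. A small illustrative sketch of that bug class; the class and field names are hypothetical, not Drill's actual code:

```python
import time

class QueryProfile:
    """Hypothetical profile of a completed query (times in milliseconds)."""

    def __init__(self, start_ms: int, end_ms: int):
        self.start_ms = start_ms
        self.end_ms = end_ms  # recorded once, when the query completes

    def duration_buggy(self) -> int:
        # Wrong for completed queries: grows on every page reload,
        # because "now" keeps moving.
        return int(time.time() * 1000) - self.start_ms

    def duration_fixed(self) -> int:
        # Correct: a completed query uses its stored end time.
        return self.end_ms - self.start_ms

profile = QueryProfile(start_ms=1_000, end_ms=1_500)
# duration_fixed() stays at 500 ms no matter when the page is rendered.
```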



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)