[jira] [Created] (DRILL-5735) UI options grouping and filtering & Metrics hints
Muhammad Gelbana created DRILL-5735:
---
Summary: UI options grouping and filtering & Metrics hints
Key: DRILL-5735
URL: https://issues.apache.org/jira/browse/DRILL-5735
Project: Apache Drill
Issue Type: Improvement
Components: Web Server
Affects Versions: 1.11.0, 1.10.0, 1.9.0
Reporter: Muhammad Gelbana

I can think of some UI improvements that could make all the difference for users trying to optimize low-performing queries.

h2. Options
h3. Grouping
We can group the options by their scope of effect; this will help users easily locate the options they may need to tune.
h3. Filtering
Since there are many options, we can add a filtering mechanism (i.e. string search or group/scope filtering) so users can filter out the options they are not interested in. To provide more benefit than the grouping idea mentioned above, filtering could also match keywords, not just the option name, since users may not know the name of the option they are looking for.

h2. Metrics
I'm referring here to the metrics page and the query-execution-plan page that displays the overview section and major/minor fragment metrics. We can show hints for each metric, such as:
# What the metric represents, in more detail.
# Which option(s), or scope of options, to tune (increase? decrease?) to improve the performance this metric reports.
# Maybe even a small dialog to quickly modify the option(s) related to that metric.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
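Until such filtering exists in the web UI, a rough equivalent is already available from SQL through the {{sys.options}} system table. This is a sketch only; the exact column set of {{sys.options}} varies across Drill versions, so {{SELECT *}} is used rather than naming columns:

{code:sql}
-- Approximate the proposed UI filter: show only options whose name mentions "memory".
SELECT *
FROM sys.options
WHERE name LIKE '%memory%'
ORDER BY name
{code}

The same pattern works for scope-style filtering by matching on a name prefix such as {{'planner.%'}} or {{'store.parquet.%'}}.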
[jira] [Created] (DRILL-5718) java.lang.IllegalStateException: Memory was leaked by query
Muhammad Gelbana created DRILL-5718:
---
Summary: java.lang.IllegalStateException: Memory was leaked by query
Key: DRILL-5718
URL: https://issues.apache.org/jira/browse/DRILL-5718
Project: Apache Drill
Issue Type: Bug
Components: Execution - Flow, Execution - RPC
Affects Versions: 1.11.0, 1.9.0
Environment: Linux iWebGelbanaDev 2.6.32-696.1.1.el6.x86_64 #1 SMP Tue Apr 11 17:13:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
48 Cores
25 GB Heap
200 GB Direct memory
Reporter: Muhammad Gelbana

Configurations
{noformat}
planner.memory.max_query_memory_per_node: 17179869184 (16 GB)
planner.width.max_per_node: 48
store.parquet.block-size: 134217728 (128 MB, this is the block size used to create the parquet files)
{noformat}

{noformat}
Fragment 0:0
[Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
[BitServer-4] INFO org.apache.drill.exec.work.fragment.FragmentExecutor - 267104f2-e48d-1d66-63f4-387848c1ccf2:1:10: State change requested RUNNING --> CANCELLATION_REQUESTED
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: ChannelClosedException: Channel closed /127.0.0.1:31010 <--> /127.0.0.1:40404.
Fragment 0:0
[Error Id: 05c39a1e-c8a8-4147-870f-e0cdbb454e53 on iWebStitchFixDev:31010]
	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
	at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.drill.exec.rpc.ChannelClosedException: Channel closed /127.0.0.1:31010 <--> /127.0.0.1:40404.
	at org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:164)
	at org.apache.drill.exec.rpc.RpcBus$ChannelClosedHandler.operationComplete(RpcBus.java:144)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:943)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:592)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:584)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.close(DefaultChannelPipeline.java:1099)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
	at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:615)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:600)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:466)
	at io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:187)
	at org.apache.drill.exec.rpc.BasicServer$LoggingReadTimeoutHandler.readTimedOut(BasicServer.java:122)
	at io.netty.handler.timeout.ReadTimeoutHandler$ReadTimeoutTask.run(ReadTimeoutHandler.java:212)
	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
	... 1 more
	Suppressed: org.apache.drill.exec.rpc.RpcException: Failure sending message.
		at org.apache.drill.exec.rpc.RpcBus.send(RpcBus.java:124)
		at org.apache.drill.exec.rpc.user.UserServer$BitToUserConnection.sendData(UserServer.java:173)
		at
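Given the settings above, the 16 GB {{planner.memory.max_query_memory_per_node}} budget is shared by up to 48 fragments per node ({{planner.width.max_per_node: 48}}), i.e. roughly 16 GB / 48 ≈ 341 MB per fragment. As a triage step only (not a fix for the leak itself), lowering per-node parallelism for the session leaves each fragment more memory. The value below is purely illustrative:

{code:sql}
-- Illustrative triage setting, not a recommendation: fewer fragments per node,
-- so each fragment gets a larger share of the per-query memory budget.
ALTER SESSION SET `planner.width.max_per_node` = 12;
{code}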
[jira] [Created] (DRILL-5707) Non-scalar subquery fails the whole query if its aggregate column has an alias
Muhammad Gelbana created DRILL-5707:
---
Summary: Non-scalar subquery fails the whole query if its aggregate column has an alias
Key: DRILL-5707
URL: https://issues.apache.org/jira/browse/DRILL-5707
Project: Apache Drill
Issue Type: Bug
Components: Query Planning & Optimization, SQL Parser
Affects Versions: 1.11.0, 1.9.0
Reporter: Muhammad Gelbana

The following query can be handled by Drill:
{code:sql}
SELECT b.marital_status, (SELECT SUM(position_id)
FROM cp.`employee.json` a
WHERE a.marital_status = b.marital_status) AS max_a
FROM cp.`employee.json` b
{code}
But if I add an alias to the aggregate function:
{code:sql}
SELECT b.marital_status, (SELECT SUM(position_id) MY_ALIAS
FROM cp.`employee.json` a
WHERE a.marital_status = b.marital_status) AS max_a
FROM cp.`employee.json` b
{code}
Drill starts complaining that it can't handle non-scalar subqueries:
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION ERROR: Non-scalar sub-query used in an expression
See Apache Drill JIRA: DRILL-1937
{noformat}
[jira] [Created] (DRILL-5695) INTERVAL DAY multiplication isn't supported
Muhammad Gelbana created DRILL-5695:
---
Summary: INTERVAL DAY multiplication isn't supported
Key: DRILL-5695
URL: https://issues.apache.org/jira/browse/DRILL-5695
Project: Apache Drill
Issue Type: Bug
Components: Execution - Data Types
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana

I'm not sure if this is intended or a missing feature. The following query
{code:sql}
SELECT CUSTOM_DATE_TRUNC('day', CAST('1900-01-01' AS DATE) + CAST(NULL AS INTERVAL DAY) * INTERVAL '1' DAY) + 1 * INTERVAL '1' YEAR
FROM `dfs`.`path_to_parquet` Calcs
HAVING (COUNT(1) > 0)
LIMIT 0
{code}
{noformat}
2017-07-30 13:12:15,439 [268240ef-eeea-04e2-cca2-b95033061af5:foreman] INFO o.a.d.e.p.sql.TypeInferenceUtils - User Error Occurred
org.apache.drill.common.exceptions.UserException: FUNCTION ERROR: * does not support operand types (INTERVAL_DAY_TIME,INTERVAL_DAY_TIME)
[Error Id: 50c2bd86-332c-4569-a5a2-76193e7eca41 ]
	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.TypeInferenceUtils.resolveDrillFuncHolder(TypeInferenceUtils.java:644) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.TypeInferenceUtils.access$1700(TypeInferenceUtils.java:57) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.TypeInferenceUtils$DrillDefaultSqlReturnTypeInference.inferReturnType(TypeInferenceUtils.java:260) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:468) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:435) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:507) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:493) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlOperator.constructArgTypeList(SqlOperator.java:581) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:240) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:222) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
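The error points at the {{*}} operator receiving two INTERVAL_DAY_TIME operands, which suggests the custom function is not involved. A stripped-down variant (hypothetical and untested) that isolates just that multiplication for triage:

{code:sql}
-- Hypothetical minimal repro: NULL cast to an interval, multiplied by an interval literal.
SELECT CAST(NULL AS INTERVAL DAY) * INTERVAL '1' DAY
FROM (VALUES (1))
{code}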
[jira] [Created] (DRILL-5606) Some tests fail after creating a fresh clone
Muhammad Gelbana created DRILL-5606:
---
Summary: Some tests fail after creating a fresh clone
Key: DRILL-5606
URL: https://issues.apache.org/jira/browse/DRILL-5606
Project: Apache Drill
Issue Type: Bug
Components: Tools, Build & Test
Environment:
{noformat}
$ uname -a
Linux mg-mate 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
$ java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
{noformat}
The environment variables JAVA_HOME, JRE_HOME and JDK_HOME aren't configured. The Java executable is found because the PATH environment variable links to it. I can provide more details if needed.
Reporter: Muhammad Gelbana

I cloned Drill from GitHub using this URL: [https://github.com/apache/drill.git] and I didn't change the branch afterwards, so I'm using *master*.
Afterwards, I ran the following command:
{noformat}
mvn clean install
{noformat}
I attached the full log, but here is a snippet indicating the failing tests:
{noformat}
Failed tests:
  TestExtendedTypes.checkReadWriteExtended:60 expected:<...ateDay" : "1997-07-1[6" }, "drill_timestamp" : { "$date" : "2009-02-23T08:00:00.000Z" }, "time" : { "$time" : "19:20:30.450Z" }, "interval" : { "$interval" : "PT26.400S" }, "integer" : { "$numberLong" : 4 }, "inner" : { "bin" : { "$binary" : "ZHJpbGw=" }, "drill_date" : { "$dateDay" : "1997-07-16]" }, "drill_...>
  but was:<...ateDay" : "1997-07-1[5" }, "drill_timestamp" : { "$date" : "2009-02-23T08:00:00.000Z" }, "time" : { "$time" : "19:20:30.450Z" }, "interval" : { "$interval" : "PT26.400S" }, "integer" : { "$numberLong" : 4 }, "inner" : { "bin" : { "$binary" : "ZHJpbGw=" }, "drill_date" : { "$dateDay" : "1997-07-15]" }, "drill_...>
Tests in error:
  TestCastFunctions.testToDateForTimeStamp:79 » at position 0 column '`col`' mi...
  TestNewDateFunctions.testIsDate:61 » After matching 0 records, did not find e...
Tests run: 2128, Failures: 1, Errors: 2, Skipped: 139
[INFO]
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Drill Root POM .. SUCCESS [ 19.805 s]
[INFO] tools/Parent Pom ... SUCCESS [ 0.605 s]
[INFO] tools/freemarker codegen tooling ... SUCCESS [ 7.077 s]
[INFO] Drill Protocol . SUCCESS [ 7.959 s]
[INFO] Common (Logical Plan, Base expressions) SUCCESS [ 7.734 s]
[INFO] Logical Plan, Base expressions . SUCCESS [ 8.099 s]
[INFO] exec/Parent Pom SUCCESS [ 0.575 s]
[INFO] exec/memory/Parent Pom . SUCCESS [ 0.513 s]
[INFO] exec/memory/base ... SUCCESS [ 4.666 s]
[INFO] exec/rpc ... SUCCESS [ 2.684 s]
[INFO] exec/Vectors ... SUCCESS [01:11 min]
[INFO] contrib/Parent Pom . SUCCESS [ 0.547 s]
[INFO] contrib/data/Parent Pom SUCCESS [ 0.496 s]
[INFO] contrib/data/tpch-sample-data .. SUCCESS [ 2.698 s]
[INFO] exec/Java Execution Engine . FAILURE [19:09 min]
{noformat}
[jira] [Created] (DRILL-5583) Literal expression not handled
Muhammad Gelbana created DRILL-5583:
---
Summary: Literal expression not handled
Key: DRILL-5583
URL: https://issues.apache.org/jira/browse/DRILL-5583
Project: Apache Drill
Issue Type: Bug
Components: SQL Parser
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana

The following query
{code:sql}
SELECT ((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 70 + 17)) `TEMP(Test)(64617177)(0)`
FROM `dfs`.`path_to_parquet` Calcs
GROUP BY ((UNIX_TIMESTAMP(Calcs.`date0`, '-MM-dd') / (60 * 60 * 24)) + (365 * 70 + 17))
{code}
Throws the following exception
{noformat}
[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: AssertionError: Internal error: invalid literal: 60 + 2
[Error Id: 5ee33c0f-9edc-43a0-8125-3e6499e72410 on mgelbana:31010]
	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:825) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:935) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) [drill-java-exec-1.9.0.jar:1.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected exception during fragment initialization: Internal error: invalid literal: 60 + 2
	... 4 common frames omitted
Caused by: java.lang.AssertionError: Internal error: invalid literal: 60 + 2
	at org.apache.calcite.util.Util.newInternal(Util.java:777) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlLiteral.value(SqlLiteral.java:329) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlCallBinding.getOperandLiteralValue(SqlCallBinding.java:219) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlBinaryOperator.getMonotonicity(SqlBinaryOperator.java:188) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.SqlCallBinding.getOperandMonotonicity(SqlCallBinding.java:193) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.fun.SqlMonotonicBinaryOperator.getMonotonicity(SqlMonotonicBinaryOperator.java:59) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.drill.exec.planner.sql.DrillCalciteSqlOperatorWrapper.getMonotonicity(DrillCalciteSqlOperatorWrapper.java:107) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.calcite.sql.SqlCall.getMonotonicity(SqlCall.java:175) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql.validate.SelectScope.getMonotonicity(SelectScope.java:154) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.createAggImpl(SqlToRelConverter.java:2476) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertAgg(SqlToRelConverter.java:2374) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:603) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:564) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:2769) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:518) ~[calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
	at org.apache.drill.exec.planner.sql.SqlConverter.toRel(SqlConverter.java:263) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRel(DefaultSqlHandler.java:626) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:195) ~[drill-java-exec-1.9.0.jar:1.9.0]
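The assertion fires while Calcite checks monotonicity of the constant sub-expression in the GROUP BY key, so the failure may not require {{UNIX_TIMESTAMP}} at all. A simplified variant (hypothetical and untested) that keeps only the arithmetic shape of the original query:

{code:sql}
-- Hypothetical minimal repro: the same constant arithmetic in SELECT and GROUP BY.
SELECT (position_id / (60 * 60 * 24)) + (365 * 70 + 17)
FROM cp.`employee.json`
GROUP BY (position_id / (60 * 60 * 24)) + (365 * 70 + 17)
{code}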
[jira] [Created] (DRILL-5539) drillbit.sh script breaks if the working directory contains spaces
Muhammad Gelbana created DRILL-5539:
---
Summary: drillbit.sh script breaks if the working directory contains spaces
Key: DRILL-5539
URL: https://issues.apache.org/jira/browse/DRILL-5539
Project: Apache Drill
Issue Type: Bug
Environment: Linux
Reporter: Muhammad Gelbana

The following output occurred when we tried running the drillbit.sh script in a path that contains spaces: */home/folder1/Folder Name/drill/bin*
{noformat}
[mgelbana@regression-sysops bin]$ ./drillbit.sh start
./drillbit.sh: line 114: [: /home/folder1/Folder: binary operator expected
Starting drillbit, logging to /home/folder1/Folder Name/drill/log/drillbit.out
./drillbit.sh: line 147: $pid: ambiguous redirect
[mgelbana@regression-sysops bin]$ pwd
/home/folder1/Folder Name/drill/bin
{noformat}
[jira] [Created] (DRILL-5515) "IS NOT DISTINCT FROM" and its equivalent form aren't handled alike
Muhammad Gelbana created DRILL-5515:
---
Summary: "IS NOT DISTINCT FROM" and its equivalent form aren't handled alike
Key: DRILL-5515
URL: https://issues.apache.org/jira/browse/DRILL-5515
Project: Apache Drill
Issue Type: Bug
Components: Query Planning & Optimization
Affects Versions: 1.10.0, 1.9.0
Reporter: Muhammad Gelbana

The following query fails to execute:
{code:sql}
SELECT *
FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t0`
INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1`
ON (`t0`.`UserID` IS NOT DISTINCT FROM `t1`.`UserID`)
{code}
and produces the following error message:
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION ERROR: This query cannot be planned possibly due to either a cartesian join or an inequality join
[Error Id: 0bd41e06-ccd7-45d6-a038-3359bf5a4a7f on mgelbana-incorta:31010]
{noformat}
While the query's equivalent form runs fine:
{code:sql}
SELECT *
FROM (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t0`
INNER JOIN (SELECT `UserID` FROM `dfs`.`path_ot_parquet` tc) `t1`
ON (`t0`.`UserID` = `t1`.`UserID` OR (`t0`.`UserID` IS NULL AND `t1`.`UserID` IS NULL))
{code}
[jira] [Created] (DRILL-5452) Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled
Muhammad Gelbana created DRILL-5452:
---
Summary: Join query cannot be planned although all joins are enabled and "planner.enable_nljoin_for_scalar_only" is disabled
Key: DRILL-5452
URL: https://issues.apache.org/jira/browse/DRILL-5452
Project: Apache Drill
Issue Type: Bug
Components: Query Planning & Optimization
Affects Versions: 1.10.0, 1.9.0
Reporter: Muhammad Gelbana

The following query
{code:sql}
SELECT *
FROM (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t0`
INNER JOIN (SELECT 'ABC' `UserID` FROM `dfs`.`path_to_parquet_file` tc LIMIT 2147483647) `t1`
ON (`t0`.`UserID` IS NOT DISTINCT FROM `t1`.`UserID`)
LIMIT 2147483647
{code}
Leads to the following exception
{noformat}
2017-04-28 16:59:11,722 [26fca73f-92f0-4664-4dca-88bc48265c92:foreman] INFO o.a.d.e.planner.sql.DrillSqlWorker - User Error Occurred
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: This query cannot be planned possibly due to either a cartesian join or an inequality join
[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 ]
	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:107) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) [drill-java-exec-1.9.0.jar:1.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.drill.exec.work.foreman.UnsupportedRelOperatorException: This query cannot be planned possibly due to either a cartesian join or an inequality join
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToPrel(DefaultSqlHandler.java:432) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169) ~[drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:123) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:97) [drill-java-exec-1.9.0.jar:1.9.0]
	... 5 common frames omitted
2017-04-28 16:59:11,741 [USER-rpc-event-queue] ERROR o.a.d.exec.server.rest.QueryWrapper - Query Failed
org.apache.drill.common.exceptions.UserRemoteException: UNSUPPORTED_OPERATION ERROR: This query cannot be planned possibly due to either a cartesian join or an inequality join
[Error Id: 672b4f2c-02a3-4004-af4b-279759c36c96 on mgelbana-incorta:31010]
	at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:144) [drill-java-exec-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:65) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:363) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:240) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) [drill-rpc-1.9.0.jar:1.9.0]
	at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:245) [drill-rpc-1.9.0.jar:1.9.0]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
	at
[jira] [Created] (DRILL-5393) ALTER SESSION documentation page broken link
Muhammad Gelbana created DRILL-5393:
---
Summary: ALTER SESSION documentation page broken link
Key: DRILL-5393
URL: https://issues.apache.org/jira/browse/DRILL-5393
Project: Apache Drill
Issue Type: Bug
Components: Documentation
Reporter: Muhammad Gelbana

On [this page|https://drill.apache.org/docs/modifying-query-planning-options/], there is a link to the ALTER SESSION documentation page which points to this broken link: https://drill.apache.org/docs/alter-session/
I believe the correct link should be: https://drill.apache.org/docs/set/
[jira] [Created] (DRILL-5300) SYSTEM ERROR: IllegalStateException: Memory was leaked by query while querying parquet files
Muhammad Gelbana created DRILL-5300:
---
Summary: SYSTEM ERROR: IllegalStateException: Memory was leaked by query while querying parquet files
Key: DRILL-5300
URL: https://issues.apache.org/jira/browse/DRILL-5300
Project: Apache Drill
Issue Type: Bug
Affects Versions: 1.9.0
Environment: OS: Linux
Reporter: Muhammad Gelbana
Attachments: both_queries_logs.zip

Running the following query against parquet files (I modified some values for privacy reasons)
{code:title=Query causing the long logs|borderStyle=solid}
SELECT AL4.NAME, AL5.SEGMENT2, SUM(AL1.AMOUNT), AL2.ATTRIBUTE4, AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, AL11.NAME
FROM dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA__TRX_LINE_GL_DIST_ALL` AL1,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_XX/RA_OMER_TRX_ALL` AL2,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX` AL3,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_HR_COMMON/HR_ALL_ORGANIZATION_UNITS` AL4,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_CODE_COMBINATIONS` AL5,
     dfs.`/disk2/XXX/XXX//data/../parquet//XXAT_AR_MU_TAB` AL8,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_FIN_COMMON/GL_XXX` AL11,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S` AL12,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS` AL13,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL` AL14,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL` AL15,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___S_ALL` AL16,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX___USES_ALL` AL17,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_LOCATIONS` AL18,
     dfs.`/disk2/XXX/XXX//data/../parquet/XXX_X_COMMON/XX_X_S` AL19
WHERE (AL2.SHIP_TO__USE_ID = AL15._USE_ID
  AND AL15.___ID = AL14.___ID
  AND AL14.X__ID = AL12.X__ID
  AND AL12.LOCATION_ID = AL13.LOCATION_ID
  AND AL17.___ID = AL16.___ID
  AND AL16.X__ID = AL19.X__ID
  AND AL19.LOCATION_ID = AL18.LOCATION_ID
  AND AL2.BILL_TO__USE_ID = AL17._USE_ID
  AND AL2.SET_OF_X_ID = AL3.SET_OF_X_ID
  AND AL1.CODE_COMBINATION_ID = AL5.CODE_COMBINATION_ID
  AND AL5.SEGMENT4 = AL8.MU
  AND AL1.SET_OF_X_ID = AL11.SET_OF_X_ID
  AND AL2.ORG_ID = AL4.ORGANIZATION_ID
  AND AL2.OMER_TRX_ID = AL1.OMER_TRX_ID)
  AND ((AL5.SEGMENT2 = '41'
  AND AL1.AMOUNT <> 0
  AND AL4.NAME IN ('XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-', 'XXX-XX-')
  AND AL3.NAME like '%-PR-%'))
GROUP BY AL4.NAME, AL5.SEGMENT2, AL2.ATTRIBUTE4, AL2.XXX__CODE, AL8.D_BU, AL8.F_PL, AL18.COUNTRY, AL13.COUNTRY, AL11.NAME
{code}
{code:title=Query causing the short logs|borderStyle=solid}
SELECT AL11.NAME
FROM dfs.`/XXX/XXX/XXX/data/../parquet/XXX_XXX_COMMON/GL_XXX` LIMIT 10
{code}
This issue may be a duplicate of [this one|https://issues.apache.org/jira/browse/DRILL-4398], but I created a new one based on [this suggestion|https://issues.apache.org/jira/browse/DRILL-4398?focusedCommentId=15884846=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15884846].
[jira] [Created] (DRILL-5197) CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]
Muhammad Gelbana created DRILL-5197:
---
Summary: CASE statement fails due to error: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]
Key: DRILL-5197
URL: https://issues.apache.org/jira/browse/DRILL-5197
Project: Apache Drill
Issue Type: Bug
Components: Execution - Data Types
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana

The following query fails for no obvious reason:
{code:sql}
SELECT CASE
         WHEN `tname`.`full_name` = 'ABC' THEN (
           CASE
             WHEN `tname`.`full_name` = 'ABC' THEN (
               CASE
                 WHEN `tname`.`full_name` = ' ' THEN (
                   CASE
                     WHEN `tname`.`full_name` = 'ABC' THEN `tname`.`full_name`
                     ELSE NULL
                   END)
                 ELSE NULL
               END)
             ELSE NULL
           END)
         WHEN `tname`.`full_name` = 'ABC' THEN NULL
         ELSE NULL
       END
FROM cp.`employee.json` `tname`
{code}
If the {{THEN `tname`.`full_name`}} expression is changed to {{THEN 'ABC'}}, the error does not occur.

Thrown exception:
{quote}
[Error Id: e75fd0fe-132b-4eb4-b2e8-7b34dc39657e on mgelbana-incorta:31010]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.9.0.jar:1.9.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.lang.UnsupportedOperationException: Unable to get value vector class for minor type [NULL] and mode [OPTIONAL]
at org.apache.drill.exec.expr.BasicTypeHelper.getValueVectorClass(BasicTypeHelper.java:441) ~[vector-1.9.0.jar:1.9.0]
at org.apache.drill.exec.record.VectorContainer.addOrGet(VectorContainer.java:123) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:463) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.9.0.jar:1.9.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.9.0.jar:1.9.0]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_111]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_111]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.9.0.jar:1.9.0]
... 4 common frames omitted
{quote}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
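Judging from the {{Caused by}} frame, the bare {{NULL}} branches appear to resolve to minor type [NULL], which {{BasicTypeHelper.getValueVectorClass}} cannot map to a value vector. A possible workaround (a sketch only, not verified against this exact query) is to give every NULL branch an explicit type with CAST, shown here on a reduced form of the query:
{code:sql}
-- Possible workaround (untested sketch): type each NULL branch explicitly
-- so the projection never has to materialize an untyped NULL column.
SELECT CASE
         WHEN `tname`.`full_name` = 'ABC' THEN `tname`.`full_name`
         ELSE CAST(NULL AS VARCHAR)
       END
FROM cp.`employee.json` `tname`
{code}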
[jira] [Created] (DRILL-5193) UDF returns NULL as expected only if the input is a literal
Muhammad Gelbana created DRILL-5193:
---
Summary: UDF returns NULL as expected only if the input is a literal
Key: DRILL-5193
URL: https://issues.apache.org/jira/browse/DRILL-5193
Project: Apache Drill
Issue Type: Bug
Components: Functions - Drill
Affects Versions: 1.9.0
Reporter: Muhammad Gelbana

I defined the following UDF:
{code:title=SplitPartFunc.java|borderStyle=solid}
import javax.inject.Inject;

import org.apache.drill.exec.expr.DrillSimpleFunc;
import org.apache.drill.exec.expr.annotations.FunctionTemplate;
import org.apache.drill.exec.expr.annotations.Output;
import org.apache.drill.exec.expr.annotations.Param;
import org.apache.drill.exec.expr.holders.IntHolder;
import org.apache.drill.exec.expr.holders.NullableVarCharHolder;
import org.apache.drill.exec.expr.holders.VarCharHolder;

import io.netty.buffer.DrillBuf;

@FunctionTemplate(name = "split_string", scope = FunctionTemplate.FunctionScope.SIMPLE, nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
public class SplitPartFunc implements DrillSimpleFunc {

    @Param
    VarCharHolder input;

    @Param(constant = true)
    VarCharHolder delimiter;

    @Param(constant = true)
    IntHolder field;

    @Output
    NullableVarCharHolder out;

    @Inject
    DrillBuf buffer;

    public void setup() {
    }

    public void eval() {
        String stringValue = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(input.start, input.end, input.buffer);
        out.buffer = buffer; // If I return before this statement, an NPE is thrown :(
        if (stringValue == null) {
            return;
        }
        int fieldValue = field.value;
        if (fieldValue <= 0) {
            return;
        }
        String delimiterValue = org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.toStringFromUTF8(delimiter.start, delimiter.end, delimiter.buffer);
        if (delimiterValue == null) {
            return;
        }
        String[] splittedInput = stringValue.split(delimiterValue);
        if (splittedInput.length < fieldValue) {
            return;
        }
        // Put the output value in the out buffer
        String outputValue = splittedInput[fieldValue - 1];
        out.start = 0;
        out.end = outputValue.getBytes().length;
        buffer.setBytes(0, outputValue.getBytes());
        out.isSet = 1;
    }
}
{code}
If I run the following query on the sample employee.json file (or a Parquet equivalent, after changing the table and column names):
{code:title=SQL Query|borderStyle=solid}
SELECT full_name, split_string(full_name, ' ', 4), split_string('Whatever', ' ', 4) FROM cp.employee.json LIMIT 1
{code}
I get the following result:
!https://i.stack.imgur.com/L8uQW.png!
Shouldn't I be getting the column value and null for the other 2 columns?
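For what it's worth, the splitting logic itself behaves the same for column-like and literal-like inputs once it is lifted out of Drill. The sketch below is a hypothetical standalone harness, not a Drill UDF: {{splitPart}} mirrors the early-return paths of {{eval()}} above, returning null wherever {{eval()}} would leave {{out.isSet}} at 0.

```java
// Standalone sketch of the split_string logic from SplitPartFunc.eval(),
// with the Drill holders replaced by plain Java types. splitPart is a
// hypothetical helper for illustration, not part of any Drill API.
public class SplitPartSketch {

    static String splitPart(String input, String delimiter, int field) {
        // Mirrors the early returns in eval(): each would leave out.isSet at 0.
        if (input == null || delimiter == null || field <= 0) {
            return null;
        }
        String[] parts = input.split(delimiter); // note: split() takes a regex
        if (parts.length < field) {
            return null; // fewer tokens than requested
        }
        return parts[field - 1]; // 1-based field index, like the UDF
    }

    public static void main(String[] args) {
        // Any two-token name has no 4th field, and neither does the literal,
        // so both calls in the reported query should yield null.
        System.out.println(splitPart("Some Name", " ", 4));
        System.out.println(splitPart("Whatever", " ", 4));
        System.out.println(splitPart("a b c d", " ", 4));
    }
}
```

Since the plain-Java logic returns null for the column input and the literal alike, the discrepancy in the screenshot may lie in how the unset {{NullableVarCharHolder}} is materialized rather than in the splitting code, though that is only a guess.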