[jira] [Commented] (DRILL-4354) Remove sessions in anonymous (user auth disabled) mode in WebUI server

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168347#comment-15168347
 ] 

ASF GitHub Bot commented on DRILL-4354:
---

Github user sudheeshkatkam commented on the pull request:

https://github.com/apache/drill/pull/360#issuecomment-189091692
  
+1

I have two minor comments. Looks like DRILL-4353 is already committed.


> Remove sessions in anonymous (user auth disabled) mode in WebUI server
> --
>
> Key: DRILL-4354
> URL: https://issues.apache.org/jira/browse/DRILL-4354
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 1.6.0
>
>
> Currently we open anonymous sessions when user auth is disabled. These sessions 
> are cleaned up when they expire (controlled by boot config 
> {{drill.exec.http.session_max_idle_secs}}). This may lead to unnecessary 
> resource accumulation. This JIRA is to remove anonymous sessions and only 
> have sessions when user authentication is enabled.
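
For reference, a minimal sketch of reading the boot option named above. It assumes the value is accessed through Drill's Typesafe-Config-backed DrillConfig as an integer number of seconds, as the option name suggests; it is illustration only, not part of the patch.

{code}
import org.apache.drill.common.config.DrillConfig;

// Sketch only: load the boot configuration and read the WebUI session idle
// timeout that controls when anonymous sessions are expired.
public class SessionIdleTimeoutSketch {
  public static void main(String[] args) {
    DrillConfig config = DrillConfig.create();
    // Assumption: the option is an integer number of seconds.
    int maxIdleSecs = config.getInt("drill.exec.http.session_max_idle_secs");
    System.out.println("WebUI sessions expire after " + maxIdleSecs + " idle seconds");
  }
}
{code}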



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4354) Remove sessions in anonymous (user auth disabled) mode in WebUI server

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168346#comment-15168346
 ] 

ASF GitHub Bot commented on DRILL-4354:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/360#discussion_r54200836
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/auth/DrillUserPrincipal.java
 ---
@@ -60,13 +63,21 @@ public String getName() {
   }
 
   /**
-   * @return Return {@link DrillClient} instanced with credentials of this 
user principal.
+   * @return Return {@link DrillClient} instanced with credentials of this 
user principal. Returned {@link DrillClient}
+   * must be returned using {@link #recycleDrillClient(DrillClient)} for 
proper resource cleanup.
*/
-  public DrillClient getDrillClient() {
+  public DrillClient getDrillClient() throws IOException {
 return drillClient;
   }
 
   /**
+   *
--- End diff --

missing?
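
A minimal usage sketch of the get/recycle contract spelled out in the new javadoc; the surrounding resource class and the submit helper are hypothetical, only the getDrillClient()/recycleDrillClient() names come from the diff.

{code}
import org.apache.drill.exec.client.DrillClient;
import org.apache.drill.exec.server.rest.auth.DrillUserPrincipal;

// Hypothetical caller illustrating the borrow/return contract: every client
// obtained from getDrillClient() is handed back through recycleDrillClient()
// so the principal can clean up or pool it.
public class WebClientUsageSketch {
  String runQuery(DrillUserPrincipal principal, String sql) throws Exception {
    DrillClient client = principal.getDrillClient();   // may throw IOException
    try {
      return submit(client, sql);                      // hypothetical helper
    } finally {
      principal.recycleDrillClient(client);            // always return the client
    }
  }

  private String submit(DrillClient client, String sql) {
    return "...";  // placeholder for actual query submission
  }
}
{code}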


> Remove sessions in anonymous (user auth disabled) mode in WebUI server
> --
>
> Key: DRILL-4354
> URL: https://issues.apache.org/jira/browse/DRILL-4354
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 1.6.0
>
>
> Currently we open anonymous sessions when user auth is disabled. These sessions 
> are cleaned up when they expire (controlled by boot config 
> {{drill.exec.http.session_max_idle_secs}}). This may lead to unnecessary 
> resource accumulation. This JIRA is to remove anonymous sessions and only 
> have sessions when user authentication is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4354) Remove sessions in anonymous (user auth disabled) mode in WebUI server

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168345#comment-15168345
 ] 

ASF GitHub Bot commented on DRILL-4354:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/360#discussion_r54200801
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRestServer.java
 ---
@@ -102,4 +112,34 @@ public void dispose(DrillUserPrincipal principal) {
   // No-Op
 }
   }
+
+  // Provider which creates and cleanups DrillUserPrincipal for anonymous 
(auth disabled) mode
+  public static class AnonDrillUserPrincipalProvider implements 
Factory<DrillUserPrincipal> {
+@Inject WorkManager workManager;
+
+@RequestScoped
+@Override
+public DrillUserPrincipal provide() {
+  return new AnonDrillUserPrincipal(workManager.getContext());
+}
+
+@Override
+public void dispose(DrillUserPrincipal principal) {
+  // If this worked it would have been clean to free the resources 
here, but there are various scenarios
--- End diff --

Any [specific tickets](https://java.net/jira/browse/JERSEY/)?
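
For context, a hedged sketch of how such a Factory is typically wired into Jersey's HK2 binder so that provide() runs at the start of a request and dispose() when the request scope is released. Import paths assume Jersey 2 and may differ by version; this is not the patch itself.

{code}
import org.apache.drill.exec.server.rest.DrillRestServer;
import org.apache.drill.exec.server.rest.auth.DrillUserPrincipal;
import org.glassfish.hk2.utilities.binding.AbstractBinder;
import org.glassfish.jersey.process.internal.RequestScoped;

// Hypothetical binder: registers the request-scoped factory from the diff so
// Jersey injects a DrillUserPrincipal per request in anonymous mode.
public class WebUserBinderSketch extends AbstractBinder {
  @Override
  protected void configure() {
    bindFactory(DrillRestServer.AnonDrillUserPrincipalProvider.class)
        .to(DrillUserPrincipal.class)
        .in(RequestScoped.class);
  }
}
{code}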


> Remove sessions in anonymous (user auth disabled) mode in WebUI server
> --
>
> Key: DRILL-4354
> URL: https://issues.apache.org/jira/browse/DRILL-4354
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 1.6.0
>
>
> Currently we open anonymous sessions when user auth is disabled. These sessions 
> are cleaned up when they expire (controlled by boot config 
> {{drill.exec.http.session_max_idle_secs}}). This may lead to unnecessary 
> resource accumulation. This JIRA is to remove anonymous sessions and only 
> have sessions when user authentication is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-4434) Remove (or deprecate) GroupScan.enforceWidth and use GroupScan.getMinParallelization

2016-02-25 Thread Venki Korukanti (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venki Korukanti resolved DRILL-4434.

   Resolution: Fixed
 Assignee: Venki Korukanti
Fix Version/s: 1.6.0

> Remove (or deprecate) GroupScan.enforceWidth and use 
> GroupScan.getMinParallelization
> 
>
> Key: DRILL-4434
> URL: https://issues.apache.org/jira/browse/DRILL-4434
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
> Fix For: 1.6.0
>
>
> It seems like enforceWidth, which is used only in ExcessibleExchangeRemover, 
> is not necessary. Instead we should rely on 
> GroupScan.getMinParallelization().
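
A hypothetical sketch of the direction described above, using a stand-in interface rather than the real GroupScan so that no signature beyond what the issue names is assumed.

{code}
// Stand-in for the scan facts the exchange-removal logic needs; the method
// name mirrors the accessor named in this issue.
interface ScanParallelizationInfo {
  int getMinParallelizationWidth();
}

final class ExchangeRemovalCheckSketch {
  // Instead of asking the scan to enforce a width, derive the answer from its
  // minimum parallelization: the exchange above the scan can be removed only
  // if the scan does not require more than one minor fragment.
  static boolean canRemoveExchange(ScanParallelizationInfo scan) {
    return scan.getMinParallelizationWidth() <= 1;
  }
}
{code}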



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4434) Remove (or deprecate) GroupScan.enforceWidth and use GroupScan.getMinParallelization

2016-02-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168299#comment-15168299
 ] 

ASF GitHub Bot commented on DRILL-4434:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/390


> Remove (or deprecate) GroupScan.enforceWidth and use 
> GroupScan.getMinParallelization
> 
>
> Key: DRILL-4434
> URL: https://issues.apache.org/jira/browse/DRILL-4434
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Venki Korukanti
>
> It seems like enforceWidth, which is used only in ExcessibleExchangeRemover, 
> is not necessary. Instead we should rely on 
> GroupScan.getMinParallelization().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4441) IN operator does not work with Avro reader

2016-02-25 Thread Jacques Nadeau (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168196#comment-15168196
 ] 

Jacques Nadeau commented on DRILL-4441:
---

I'm guessing the issue is a lack of correct handling of VARCHAR(*) where we 
default to VARCHAR(1).

> IN operator does not work with Avro reader
> --
>
> Key: DRILL-4441
> URL: https://issues.apache.org/jira/browse/DRILL-4441
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.5.0
> Environment: Ubuntu
>Reporter: Stefán Baxter
> Fix For: 1.6.0
>
>
> IN operator simply does not work. 
> (And I find it interesting that Storage-Avro is not available here in Jira as 
> a Storage component)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4387) Improve execution side when it handles skipAll query

2016-02-25 Thread Suresh Ollala (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Ollala updated DRILL-4387:
-
Reviewer: Victoria Markman

> Improve execution side when it handles skipAll query
> 
>
> Key: DRILL-4387
> URL: https://issues.apache.org/jira/browse/DRILL-4387
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jinfeng Ni
>Assignee: Jinfeng Ni
> Fix For: 1.6.0
>
>
> DRILL-4279 changes the planner side and the RecordReader on the execution 
> side where they handle a skipAll query. However, it seems there are other 
> places in the codebase that do not handle a skipAll query efficiently. In 
> particular, in GroupScan or ScanBatchCreator, we will replace a NULL or empty 
> column list with the star column. This essentially forces the execution side 
> (RecordReader) to fetch all the columns for the data source. Such behavior 
> leads to a big performance overhead for the SCAN operator.
> To improve Drill's performance, we should change those places as well, as 
> follow-up work after DRILL-4279.
> One simple example of this problem is:
> {code}
>SELECT DISTINCT substring(dir1, 5) from  dfs.`/Path/To/ParquetTable`;  
> {code}
> The query does not require any regular column from the parquet file. However, 
> ParquetRowGroupScan and ParquetScanBatchCreator will put the star column as 
> the column list. If the table has dozens or hundreds of columns, this makes 
> the SCAN operator much more expensive than necessary. 
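
A sketch of the substitution pattern described above; the helper method is hypothetical, and the GroupScan.ALL_COLUMNS star-column constant is assumed here to match Drill's scan code.

{code}
import java.util.List;
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.physical.base.GroupScan;

// Sketch only, not the actual Drill code: a null or empty projection list
// (a skipAll query) is widened to the star column, which forces the reader
// to materialize every column in the file.
public class ProjectionFallbackSketch {
  static List<SchemaPath> resolveProjection(List<SchemaPath> requested) {
    if (requested == null || requested.isEmpty()) {
      // Today: treated as SELECT * -- expensive for wide tables. The proposed
      // follow-up to DRILL-4279 is to keep the empty projection and let the
      // reader produce only row counts / partition columns instead.
      return GroupScan.ALL_COLUMNS;
    }
    return requested;
  }
}
{code}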



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4442) Improve VectorAccessible and RecordBatch interfaces to provide only necessary information to the correct consumers

2016-02-25 Thread Jason Altekruse (JIRA)
Jason Altekruse created DRILL-4442:
--

 Summary: Improve VectorAccessible and RecordBatch interfaces to 
provide only necessary information to the correct consumers
 Key: DRILL-4442
 URL: https://issues.apache.org/jira/browse/DRILL-4442
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Jason Altekruse
Assignee: Jason Altekruse


During the creation of the operator test framework I ran into a small snag 
trying to share code between the existing test infrastructure and the new 
features to allow directly consuming the output of an operator rather than that 
of a query.

I needed to move the getSelectionVector2 and getSelectionVector4 methods up to 
the VectorAccessible interface.
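
A sketch (not the committed change) of the interface move described above: the selection-vector accessors, currently declared on RecordBatch, pulled up so anything handed a VectorAccessible can reach them.

{code}
import org.apache.drill.exec.record.selection.SelectionVector2;
import org.apache.drill.exec.record.selection.SelectionVector4;

// Illustrative interface only; shows the two accessors being moved up.
public interface VectorAccessibleSketch {
  SelectionVector2 getSelectionVector2();
  SelectionVector4 getSelectionVector4();
}
{code}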



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4041) Parquet library update causing random "Buffer has negative reference count"

2016-02-25 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168009#comment-15168009
 ] 

Rahul Challapalli commented on DRILL-4041:
--

We used to hit this error when we connected to a single drillbit while running 
the regression tests. Now on all our regression clusters we talk to zookeeper 
instead of a single drillbit, and we are no longer seeing this issue.

We can only close this issue once we have multiple clean regression runs while 
connecting directly to a single drillbit.
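
For reference, the two connection modes mentioned above as hedged JDBC examples; the host names and the ZooKeeper cluster path are placeholders.

{code}
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectModesSketch {
  public static void main(String[] args) throws Exception {
    // ZooKeeper-based connection (what the regression clusters use now):
    // the driver picks a drillbit registered in ZooKeeper.
    Connection viaZk = DriverManager.getConnection(
        "jdbc:drill:zk=zkhost1:2181,zkhost2:2181/drill/drillbits1");

    // Direct connection to a single drillbit (the mode under which the
    // negative-reference-count failures were originally observed).
    Connection direct = DriverManager.getConnection(
        "jdbc:drill:drillbit=drillbit-host:31010");

    viaZk.close();
    direct.close();
  }
}
{code}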

> Parquet library update causing random "Buffer has negative reference count"
> ---
>
> Key: DRILL-4041
> URL: https://issues.apache.org/jira/browse/DRILL-4041
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.3.0
>Reporter: Rahul Challapalli
>Assignee: Steven Phillips
>Priority: Critical
>
> git commit # 39582bd60c9e9b16aba4f099d434e927e7e5
> After the parquet library update commit, we started seeing the below error 
> randomly causing failures in the  Extended Functional Suite.
> {code}
> Failed with exception
> java.lang.IllegalArgumentException: Buffer has negative reference count.
>   at 
> oadd.com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:250)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:239)
>   at 
> oadd.org.apache.drill.exec.vector.BaseDataValueVector.clear(BaseDataValueVector.java:39)
>   at 
> oadd.org.apache.drill.exec.vector.NullableIntVector.clear(NullableIntVector.java:150)
>   at 
> oadd.org.apache.drill.exec.record.SimpleVectorWrapper.clear(SimpleVectorWrapper.java:84)
>   at 
> oadd.org.apache.drill.exec.record.VectorContainer.zeroVectors(VectorContainer.java:312)
>   at 
> oadd.org.apache.drill.exec.record.VectorContainer.clear(VectorContainer.java:296)
>   at 
> oadd.org.apache.drill.exec.record.RecordBatchLoader.clear(RecordBatchLoader.java:183)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.cleanup(DrillResultSetImpl.java:139)
>   at org.apache.drill.jdbc.impl.DrillCursor.close(DrillCursor.java:333)
>   at 
> oadd.net.hydromatic.avatica.AvaticaResultSet.close(AvaticaResultSet.java:110)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.close(DrillResultSetImpl.java:169)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:233)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:89)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4436) Result data gets mixed up when various tables have a column "label"

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4436:
--
Assignee: Taras Supyk

> Result data gets mixed up when various tables have a column "label"
> ---
>
> Key: DRILL-4436
> URL: https://issues.apache.org/jira/browse/DRILL-4436
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - JDBC
>Affects Versions: 1.5.0
> Environment: Drill 1.5.0 with Zookeeper on CentOS 7.0 
>Reporter: Vincent Uribe
>Assignee: Taras Supyk
>
> We have two tables in a MySQL database:
> CREATE TABLE `Gender` (
>   `genderId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `label` varchar(15) NOT NULL,
>   PRIMARY KEY (`genderId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
> CREATE TABLE `Civility` (
>   `civilityId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `abbreviation` varchar(15) NOT NULL,
>   `label` varchar(60) DEFAULT NULL,
>   PRIMARY KEY (`civilityId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=latin1;
> With a query on these two tables, selecting Gender.label as 'gender' and 
> Civility.label as 'civility', we obtain, depending on the query:
> * the gender value in the civility column
> * the civility value in the gender column
> * NULL in the other column (gender or civility)
> If we drop the table Gender and recreate it like this:
> CREATE TABLE `Gender` (
>   `genderId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `label2` varchar(15) NOT NULL,
>   PRIMARY KEY (`genderId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
> Everything is fine.
> I guess something is wrong with the metadata...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-3488) Allow Java 1.8

2016-02-25 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim updated DRILL-3488:

Assignee: Hanifi Gunes  (was: Deneche A. Hakim)

> Allow Java 1.8
> --
>
> Key: DRILL-3488
> URL: https://issues.apache.org/jira/browse/DRILL-3488
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Tools, Build & Test
>Reporter: Andrew
>Assignee: Hanifi Gunes
>Priority: Trivial
> Fix For: 1.6.0
>
> Attachments: DRILL-3488.1.patch.txt
>
>
> From my limited testing it seems that Drill works well with either Java 1.7 
> or 1.8. I'd like to change the top-level pom to allow 1.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4436) Result data gets mixed up when various tables have a column "label"

2016-02-25 Thread Jason Altekruse (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Altekruse updated DRILL-4436:
---
Component/s: Storage - JDBC

> Result data gets mixed up when various tables have a column "label"
> ---
>
> Key: DRILL-4436
> URL: https://issues.apache.org/jira/browse/DRILL-4436
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - JDBC
>Affects Versions: 1.5.0
> Environment: Drill 1.5.0 with Zookeeper on CentOS 7.0 
>Reporter: Vincent Uribe
>
> We have two tables in a MySQL database:
> CREATE TABLE `Gender` (
>   `genderId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `label` varchar(15) NOT NULL,
>   PRIMARY KEY (`genderId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
> CREATE TABLE `Civility` (
>   `civilityId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `abbreviation` varchar(15) NOT NULL,
>   `label` varchar(60) DEFAULT NULL,
>   PRIMARY KEY (`civilityId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=latin1;
> With a query on these two tables, selecting Gender.label as 'gender' and 
> Civility.label as 'civility', we obtain, depending on the query:
> * the gender value in the civility column
> * the civility value in the gender column
> * NULL in the other column (gender or civility)
> If we drop the table Gender and recreate it like this:
> CREATE TABLE `Gender` (
>   `genderId` bigint(20) NOT NULL AUTO_INCREMENT,
>   `label2` varchar(15) NOT NULL,
>   PRIMARY KEY (`genderId`)
> ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
> Everything is fine.
> I guess something is wrong with the metadata...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4441) IN operator does not work with Avro reader

2016-02-25 Thread JIRA
Stefán Baxter created DRILL-4441:


 Summary: IN operator does not work with Avro reader
 Key: DRILL-4441
 URL: https://issues.apache.org/jira/browse/DRILL-4441
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Other
Affects Versions: 1.5.0
 Environment: Ubuntu
Reporter: Stefán Baxter
 Fix For: 1.6.0


IN operator simply does not work. 

(And I find it interesting that Storage-Avro is not available here in Jira as a 
Storage component)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4441) IN operator does not work with Avro reader

2016-02-25 Thread JIRA

[ 
https://issues.apache.org/jira/browse/DRILL-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167846#comment-15167846
 ] 

Stefán Baxter commented on DRILL-4441:
--

This query targets Avro files in the latest 1.5 release:

0: jdbc:drill:zk=local> select count(*) from 
dfs.asa.`/streaming/venuepoint/transactions/` as s where s.sold_to = 
'Customer/4-2492847';
+-+
| EXPR$0  |
+-+
| 5788|
+-+

0: jdbc:drill:zk=local> select count(*) from 
dfs.asa.`/streaming/venuepoint/transactions/` as s where s.sold_to IN 
('Customer/4-2492847');
+-+
| EXPR$0  |
+-+
| 0   |
+-+

It shows that the IN operator does not work with Avro (works with Parquet).


> IN operator does not work with Avro reader
> --
>
> Key: DRILL-4441
> URL: https://issues.apache.org/jira/browse/DRILL-4441
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Affects Versions: 1.5.0
> Environment: Ubuntu
>Reporter: Stefán Baxter
> Fix For: 1.6.0
>
>
> IN operator simply does not work. 
> (And I find it interesting that Storage-Avro is not available here in Jira as 
> a Storage component)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4041) Parquet library update causing random "Buffer has negative reference count"

2016-02-25 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167751#comment-15167751
 ] 

Zelaine Fong commented on DRILL-4041:
-

[~sphillips], [~rkins] - Is this still an open issue?

> Parquet library update causing random "Buffer has negative reference count"
> ---
>
> Key: DRILL-4041
> URL: https://issues.apache.org/jira/browse/DRILL-4041
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Affects Versions: 1.3.0
>Reporter: Rahul Challapalli
>Assignee: Steven Phillips
>Priority: Critical
>
> git commit # 39582bd60c9e9b16aba4f099d434e927e7e5
> After the parquet library update commit, we started seeing the below error 
> randomly causing failures in the  Extended Functional Suite.
> {code}
> Failed with exception
> java.lang.IllegalArgumentException: Buffer has negative reference count.
>   at 
> oadd.com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:250)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:259)
>   at oadd.io.netty.buffer.DrillBuf.release(DrillBuf.java:239)
>   at 
> oadd.org.apache.drill.exec.vector.BaseDataValueVector.clear(BaseDataValueVector.java:39)
>   at 
> oadd.org.apache.drill.exec.vector.NullableIntVector.clear(NullableIntVector.java:150)
>   at 
> oadd.org.apache.drill.exec.record.SimpleVectorWrapper.clear(SimpleVectorWrapper.java:84)
>   at 
> oadd.org.apache.drill.exec.record.VectorContainer.zeroVectors(VectorContainer.java:312)
>   at 
> oadd.org.apache.drill.exec.record.VectorContainer.clear(VectorContainer.java:296)
>   at 
> oadd.org.apache.drill.exec.record.RecordBatchLoader.clear(RecordBatchLoader.java:183)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.cleanup(DrillResultSetImpl.java:139)
>   at org.apache.drill.jdbc.impl.DrillCursor.close(DrillCursor.java:333)
>   at 
> oadd.net.hydromatic.avatica.AvaticaResultSet.close(AvaticaResultSet.java:110)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.close(DrillResultSetImpl.java:169)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.executeQuery(DrillTestJdbc.java:233)
>   at 
> org.apache.drill.test.framework.DrillTestJdbc.run(DrillTestJdbc.java:89)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4440) Host file location for Windows incorrect in doc

2016-02-25 Thread Andries Engelbrecht (JIRA)
Andries Engelbrecht created DRILL-4440:
--

 Summary: Host file location for Windows incorrect in doc
 Key: DRILL-4440
 URL: https://issues.apache.org/jira/browse/DRILL-4440
 Project: Apache Drill
  Issue Type: Bug
  Components: Documentation
Reporter: Andries Engelbrecht
Priority: Minor


The hosts file location on the page
https://drill.apache.org/docs/installing-the-driver-on-windows/

shows /etc/hosts, which is for Linux/Mac.

It should point to 

\Windows\system32\drivers\etc\hosts 

for Windows systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1431) Drillbit continue to report positive status after hitting out of memory condition

2016-02-25 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167742#comment-15167742
 ] 

Zelaine Fong commented on DRILL-1431:
-

[~norrisl] - Do you know if this is still a problem in the latest version of 
Drill?

> Drillbit continue to report positive status after hitting out of memory 
> condition
> -
>
> Key: DRILL-1431
> URL: https://issues.apache.org/jira/browse/DRILL-1431
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Norris Lee
>Priority: Minor
> Fix For: Future
>
>
> Over time, a drillbit tends to hang. E.g., if a drillbit runs out of memory, 
> the drillbit will still be in the list of drillbits returned by zookeeper. 
> Since the Drill client randomly selects a drillbit from that list, the drillbit 
> it connects to may be unhealthy. The Drill client should do a quick connection 
> check to determine whether the drillbit is actually healthy. If not, it should 
> select another drillbit and test again until it has found a healthy drillbit.
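
A hypothetical sketch of the client-side behaviour the report asks for; the endpoint strings and the health check are placeholders, not Drill client APIs.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch: pick a random drillbit from the ZooKeeper-registered list, run a
// quick health check, and fall back to another endpoint if the check fails.
public class HealthyDrillbitPickerSketch {
  public static String pick(List<String> registeredEndpoints) {
    List<String> candidates = new ArrayList<>(registeredEndpoints);
    Random random = new Random();
    while (!candidates.isEmpty()) {
      String choice = candidates.remove(random.nextInt(candidates.size()));
      if (quickConnectionCheck(choice)) {
        return choice;
      }
    }
    throw new IllegalStateException("no healthy drillbit available");
  }

  // Placeholder for the "quick connection check" suggested in the report.
  private static boolean quickConnectionCheck(String endpoint) {
    return true; // a real check would open and close a lightweight connection
  }
}
{code}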



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-3881) Rowkey filter does not get pushed into Scan

2016-02-25 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-3881:

Fix Version/s: (was: Future)
   1.7.0

> Rowkey filter does not get pushed into Scan
> ---
>
> Key: DRILL-3881
> URL: https://issues.apache.org/jira/browse/DRILL-3881
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Khurram Faraaz
>Assignee: Smidth Panchamia
>Priority: Critical
> Fix For: 1.7.0
>
>
> Rowkey filter does not get pushed down into Scan
> 4 node cluster CentOS
> Drill master commit ID: b9afcf8f
> case 1) Rowkey filter does not get pushed into Scan
> {code}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for select 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') AS 
> RK,CONVERT_FROM(T.`colfam1`.`qual1`,'UTF8') FROM flt_Tbl T WHERE ROW_KEY = 
> CAST('3.0838087E38' AS FLOAT);
> +--+--+
> | text | json |
> +--+--+
> | 00-00Screen
> 00-01  Project(RK=[CONVERT_FROMFLOAT_OB($0)], 
> EXPR$1=[CONVERT_FROMUTF8(ITEM($1, 'qual1'))])
> 00-02SelectionVectorRemover
> 00-03  Filter(condition=[=($0, CAST('3.0838087E38'):FLOAT NOT NULL)])
> 00-04Scan(groupscan=[HBaseGroupScan [HBaseScanSpec=HBaseScanSpec 
> [tableName=flt_Tbl, startRow=null, stopRow=null, filter=null], 
> columns=[`*`]]])
> {code}
> case 2) Rowkey filter does not get pushed into Scan
> {code}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for select 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') AS 
> RK,CONVERT_FROM(T.`colfam1`.`qual1`,'UTF8') FROM flt_Tbl T WHERE 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') = CAST('3.0838087E38' AS FLOAT) AND 
> CONVERT_FROM(T.`colfam1`.`qual1`,'UTF8') LIKE '%30838087473969088%' order by 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') ASC;
> +--+--+
> | text | json |
> +--+--+
> | 00-00Screen
> 00-01  Project(RK=[$0], EXPR$1=[$1])
> 00-02SelectionVectorRemover
> 00-03  Sort(sort0=[$0], dir0=[ASC])
> 00-04Project(RK=[CONVERT_FROMFLOAT_OB($0)], 
> EXPR$1=[CONVERT_FROMUTF8(ITEM($1, 'qual1'))])
> 00-05  SelectionVectorRemover
> 00-06Filter(condition=[AND(=(CONVERT_FROM($0, 'FLOAT_OB'), 
> CAST('3.0838087E38'):FLOAT NOT NULL), LIKE(CONVERT_FROM(ITEM($1, 'qual1'), 
> 'UTF8'), '%30838087473969088%'))])
> 00-07  Scan(groupscan=[HBaseGroupScan 
> [HBaseScanSpec=HBaseScanSpec [tableName=flt_Tbl, startRow=, stopRow=, 
> filter=SingleColumnValueFilter (colfam1, qual1, EQUAL, 
> ^.*\x5CQ30838087473969088\x5CE.*$)], columns=[`*`]]])
> {code}
> Same as case (2) just that ASC is missing in order by clause.
> {code}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for select 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') AS 
> RK,CONVERT_FROM(T.`colfam1`.`qual1`,'UTF8') FROM flt_Tbl T WHERE 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB') = CAST('3.0838087E38' AS FLOAT) AND 
> CONVERT_FROM(T.`colfam1`.`qual1`,'UTF8') LIKE '%30838087473969088%' order by 
> CONVERT_FROM(ROW_KEY,'FLOAT_OB');
> +--+--+
> | text | json |
> +--+--+
> | 00-00Screen
> 00-01  Project(RK=[$0], EXPR$1=[$1])
> 00-02SelectionVectorRemover
> 00-03  Sort(sort0=[$0], dir0=[ASC])
> 00-04Project(RK=[CONVERT_FROMFLOAT_OB($0)], 
> EXPR$1=[CONVERT_FROMUTF8(ITEM($1, 'qual1'))])
> 00-05  SelectionVectorRemover
> 00-06Filter(condition=[AND(=(CONVERT_FROM($0, 'FLOAT_OB'), 
> CAST('3.0838087E38'):FLOAT NOT NULL), LIKE(CONVERT_FROM(ITEM($1, 'qual1'), 
> 'UTF8'), '%30838087473969088%'))])
> 00-07  Scan(groupscan=[HBaseGroupScan 
> [HBaseScanSpec=HBaseScanSpec [tableName=flt_Tbl, startRow=, stopRow=, 
> filter=SingleColumnValueFilter (colfam1, qual1, EQUAL, 
> ^.*\x5CQ30838087473969088\x5CE.*$)], columns=[`*`]]])
> {code}
> Snippet that creates and inserts data into HBase table.
> {code}
> public static void main(String args[]) throws IOException {
> Configuration conf = HBaseConfiguration.create();
> conf.set("hbase.zookeeper.property.clientPort","5181");
> HBaseAdmin admin = new HBaseAdmin(conf);
> if (admin.tableExists("flt_Tbl")) {
> admin.disableTable("flt_Tbl");
> admin.deleteTable("flt_Tbl");
> }
> HTableDescriptor tableDesc = new
> HTableDescriptor(TableName.valueOf("flt_Tbl"));
> tableDesc.addFamily(new HColumnDescriptor("colfam1"));
> admin.createTable(tableDesc);
> HTable table  = new HTable(conf, "flt_Tbl");
> //for (float i = (float)0.5; i <= 100.00; i += 0.75) {
> for (float i = (float)1.4E-45; i <= Float.MAX_VALUE; i += 
> Float.MAX_VALUE / 64) {
> byte[] bytes = new byte[5];
> org.apache.hadoop.hbase.util.PositionedByteRange br =

[jira] [Created] (DRILL-4439) Improve new unit operator tests to handle operators that expect RawBatchBuffers off of the wire, such as the UnorderedReciever and MergingReciever

2016-02-25 Thread Jason Altekruse (JIRA)
Jason Altekruse created DRILL-4439:
--

 Summary: Improve new unit operator tests to handle operators that 
expect RawBatchBuffers off of the wire, such as the UnorderedReciever and 
MergingReciever
 Key: DRILL-4439
 URL: https://issues.apache.org/jira/browse/DRILL-4439
 Project: Apache Drill
  Issue Type: Test
Reporter: Jason Altekruse
Assignee: Jason Altekruse






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4438) Fix out of memory failure identified by new operator unit tests

2016-02-25 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167597#comment-15167597
 ] 

Jason Altekruse commented on DRILL-4438:


These failing tests will be checked in with the patch for the new unit test 
framework; they will be annotated with @Ignore and live in the 
BasicPhysicalOpUnitTest class.

> Fix out of memory failure identified by new operator unit tests
> ---
>
> Key: DRILL-4438
> URL: https://issues.apache.org/jira/browse/DRILL-4438
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Jason Altekruse
>Assignee: Jason Altekruse
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4437) Implement framework for testing operators in isolation

2016-02-25 Thread Jason Altekruse (JIRA)
Jason Altekruse created DRILL-4437:
--

 Summary: Implement framework for testing operators in isolation
 Key: DRILL-4437
 URL: https://issues.apache.org/jira/browse/DRILL-4437
 Project: Apache Drill
  Issue Type: Test
  Components: Tools, Build & Test
Reporter: Jason Altekruse
Assignee: Jason Altekruse
 Fix For: 1.6.0


Most of the tests written for Drill are end-to-end. We spin up a full instance 
of the server, submit one or more SQL queries and check the results.

While integration tests like this are useful for ensuring that features do not 
break end-user functionality, overuse of this approach has caused a number of 
pain points.

Overall the tests end up running a lot of the exact same code, parsing and 
planning many similar queries.

Creating consistent reproductions of issues, especially edge cases found in 
clustered environments, can be extremely difficult. Even the simpler case of 
testing that operators can handle a particular series of incoming record 
batches has required hacks like generating files just large enough that the 
scanners happen to break them up into separate batches. These tests are 
brittle, as they make assumptions about how the scanners will work in the 
future. For example, a performance evaluation might show that we should 
produce larger batches in some cases; existing tests that exercise multiple 
batches by producing a few more records than the current batch-size threshold 
would then no longer test the same code paths.

We need to make more parts of the system testable without initializing the 
entire Drill server, as well as making the different internal settings and 
state of the server configurable for tests.

This is a first effort to enable testing the physical operators in Drill by 
mocking the components of the system necessary to enable operators to 
initialize and execute.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4438) Fix out of memory failure identified by new operator unit tests

2016-02-25 Thread Jason Altekruse (JIRA)
Jason Altekruse created DRILL-4438:
--

 Summary: Fix out of memory failure identified by new operator unit 
tests
 Key: DRILL-4438
 URL: https://issues.apache.org/jira/browse/DRILL-4438
 Project: Apache Drill
  Issue Type: Bug
Reporter: Jason Altekruse
Assignee: Jason Altekruse
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-3930) Remove direct references to TopLevelAllocator from unit tests

2016-02-25 Thread Jason Altekruse (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Altekruse resolved DRILL-3930.

   Resolution: Fixed
 Assignee: (was: Chris Westin)
Fix Version/s: 1.3.0

> Remove direct references to TopLevelAllocator from unit tests
> -
>
> Key: DRILL-3930
> URL: https://issues.apache.org/jira/browse/DRILL-3930
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Chris Westin
> Fix For: 1.3.0
>
>
> The RootAllocatorFactory should be used throughout the code to allow us to 
> change allocators via configuration or other software choices. Some unit 
> tests still reference TopLevelAllocator directly. We also need to do a better 
> job of handling exceptions that can be handled by close()ing an allocator 
> that isn't in the proper state (remaining open child allocators, outstanding 
> buffers, etc.).
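
A sketch of the factory-based pattern the description asks tests to use; it assumes the RootAllocatorFactory.newRoot(DrillConfig) entry point and an AutoCloseable allocator, and is illustration rather than a specific test change.

{code}
import org.apache.drill.common.config.DrillConfig;
import org.apache.drill.exec.memory.BufferAllocator;
import org.apache.drill.exec.memory.RootAllocatorFactory;

// Sketch: obtain the root allocator through the factory (so the allocator
// implementation can be swapped via configuration) and close it so that
// outstanding child allocators or buffers surface as failures.
public class AllocatorUsageSketch {
  public static void main(String[] args) throws Exception {
    DrillConfig config = DrillConfig.create();
    try (BufferAllocator root = RootAllocatorFactory.newRoot(config)) {
      // allocate and use buffers here
    } // close() complains if anything is still outstanding
  }
}
{code}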



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-3610) TimestampAdd/Diff (SQL_TSI_) functions

2016-02-25 Thread Oscar Morante (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167542#comment-15167542
 ] 

Oscar Morante commented on DRILL-3610:
--

Tableau seems to use this for some aggregated extracts.

> TimestampAdd/Diff (SQL_TSI_) functions
> --
>
> Key: DRILL-3610
> URL: https://issues.apache.org/jira/browse/DRILL-3610
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Reporter: Andries Engelbrecht
>Assignee: Arina Ielchiieva
> Fix For: Future
>
>
> Add TimestampAdd and TimestampDiff (SQL_TSI) functions for year, quarter, 
> month, week, day, hour, minute, second.
> Examples
> SELECT CAST(TIMESTAMPADD(SQL_TSI_QUARTER, 1, DATE '2013-03-31') AS SQL_DATE) 
> AS `column_quarter`
> FROM `table_in`
> HAVING (COUNT(1) > 0)
> SELECT `table_in`.`datetime` AS `column1`,
>   `table`.`Key` AS `column_Key`,
>   TIMESTAMPDIFF(SQL_TSI_MINUTE,to_timestamp('2004-07-04', 
> '-MM-dd'),`table_in`.`datetime`) AS `sum_datediff_minute`
> FROM `calcs`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4425) Handle blank column names in Hbase in CONVERT_FROM

2016-02-25 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167510#comment-15167510
 ] 

Jason Altekruse commented on DRILL-4425:


I think that makes a lot of sense. Something like hbase_empty_col_name, or, if 
this can occur in other systems, perhaps we shouldn't include the source in the 
name. We can make it configurable, just in case someone manages to collide 
with whatever we pick.

So I guess we would need to add this to the schema registration of sources, as 
well as the record readers that will actually grab data from the source, and we 
would assume that only the sentinel would appear in the rest of planning and 
execution?
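
A hypothetical sketch of the sentinel idea being discussed; the class, constant, and method names below are made up for illustration only.

{code}
// Sketch: substitute a configurable placeholder for an empty HBase column (or
// family) name when the schema is registered, and reverse the mapping in the
// record reader, so only the sentinel appears in planning and execution.
public final class EmptyColumnSentinelSketch {
  // Default placeholder; configurable in case someone collides with it.
  public static final String DEFAULT = "hbase_empty_col_name";

  public static String toDrillName(String sourceName, String sentinel) {
    return (sourceName == null || sourceName.isEmpty()) ? sentinel : sourceName;
  }

  public static String toSourceName(String drillName, String sentinel) {
    return sentinel.equals(drillName) ? "" : drillName;
  }
}
{code}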

> Handle blank column names in Hbase in CONVERT_FROM
> --
>
> Key: DRILL-4425
> URL: https://issues.apache.org/jira/browse/DRILL-4425
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - HBase
>Affects Versions: 1.3.0
> Environment: Apache Drill 1.3 on HortonWorks HDP VM 2.1
>Reporter: Saurabh Nigam
>  Labels: easyfix
>
> An HBase table may contain blank column names and blank column family names. 
> Drill needs to handle this situation.
> I faced the issue when I had a column with a blank column name in my HBase 
> table. To reproduce it:
> - Create a column without any name in HBase
> - Try to access it via the Drill console
> - Try to use the CONVERT_FROM function to convert that data from Base64 
> encoding to make it readable. You won't be able to convert the blank column 
> because you cannot use a blank name in your query after a dot.
> Something like this:
> SELECT CONVERT_FROM(students. , 'UTF8') AS zipcode 
>  FROM students;
> where the column name is blank.
> We need to provide a placeholder for blank column names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4398) SYSTEM ERROR: IllegalStateException: Memory was leaked by query

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4398:
--
Assignee: Taras Supyk

> SYSTEM ERROR: IllegalStateException: Memory was leaked by query
> ---
>
> Key: DRILL-4398
> URL: https://issues.apache.org/jira/browse/DRILL-4398
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Several queries fail with memory leaked errors
> select tjoin2.rnum, tjoin1.c1, tjoin2.c1 as c1j2, tjoin2.c2 as c2j2 from 
> postgres.public.tjoin1 full outer join postgres.public.tjoin2 on tjoin1.c1 = 
> tjoin2.c1
> select tjoin1.rnum, tjoin1.c1, tjoin2.c1 as c1j2, tjoin2.c2 from 
> postgres.public.tjoin1, lateral ( select tjoin2.c1, tjoin2.c2 from 
> postgres.public.tjoin2 where tjoin1.c1=tjoin2.c1) tjoin2
> SYSTEM ERROR: IllegalStateException: Memory was leaked by query. Memory 
> leaked: (40960)
> Allocator(op:0:0:3:JdbcSubScan) 100/40960/135168/100 
> (res/actual/peak/limit)
> create table TJOIN1 (RNUM integer   not null , C1 integer, C2 integer);
> insert into TJOIN1 (RNUM, C1, C2) values ( 0, 10, 15);
> insert into TJOIN1 (RNUM, C1, C2) values ( 1, 20, 25);
> insert into TJOIN1 (RNUM, C1, C2) values ( 2, NULL, 50);
> create table TJOIN2 (RNUM integer   not null , C1 integer, C2 char(2));
> insert into TJOIN2 (RNUM, C1, C2) values ( 0, 10, 'BB');
> insert into TJOIN2 (RNUM, C1, C2) values ( 1, 15, 'DD');
> insert into TJOIN2 (RNUM, C1, C2) values ( 2, NULL, 'EE');
> insert into TJOIN2 (RNUM, C1, C2) values ( 3, 10, 'FF');



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4404) Java NPE. Unexpected exception during fragment initialization

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4404:
--
Assignee: Taras Supyk

> Java NPE.  Unexpected exception during fragment initialization
> --
>
> Key: DRILL-4404
> URL: https://issues.apache.org/jira/browse/DRILL-4404
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Error: SYSTEM ERROR: NullPointerException
> [Error Id: a290df2c-d3ff-4229-bf26-50d0b6992d77 on centos1:31010]
>   (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
> during fragment initialization: Internal error: Error while applying rule 
> ReduceExpressionsRule_Project, args 
> [rel#830960:LogicalProject.NONE.ANY([]).[](input=rel#830959:Subset#0.JDBC.postgres.ANY([]).[],EXPR$0=-(*(2,
>  2)))]
> org.apache.drill.exec.work.foreman.Foreman.run():261
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.AssertionError) Internal error: Error while applying 
> rule ReduceExpressionsRule_Project, args 
> [rel#830960:LogicalProject.NONE.ANY([]).[](input=rel#830959:Subset#0.JDBC.postgres.ANY([]).[],EXPR$0=-(*(2,
>  2)))]
> org.apache.calcite.util.Util.newInternal():792
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():251
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp():808
> org.apache.calcite.tools.Programs$RuleSetProgram.run():303
> org.apache.calcite.prepare.PlannerImpl.transform():313
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.doLogicalPlanning():542
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():218
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():252
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():172
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():199
> org.apache.drill.exec.work.foreman.Foreman.runSQL():924
> org.apache.drill.exec.work.foreman.Foreman.run():250
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.NullPointerException) null
> 
> org.apache.drill.exec.planner.logical.DrillOptiq$RexToDrill.visitCall():131
> org.apache.drill.exec.planner.logical.DrillOptiq$RexToDrill.visitCall():79
> org.apache.calcite.rex.RexCall.accept():107
> org.apache.drill.exec.planner.logical.DrillOptiq.toDrill():76
> org.apache.drill.exec.planner.logical.DrillConstExecutor.reduce():162
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions():499
> org.apache.calcite.rel.rules.ReduceExpressionsRule$1.onMatch():241
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch():228
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp():808
> org.apache.calcite.tools.Programs$RuleSetProgram.run():303
> org.apache.calcite.prepare.PlannerImpl.transform():313
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.doLogicalPlanning():542
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():218
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel():252
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():172
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():199
> org.apache.drill.exec.work.foreman.Foreman.runSQL():924
> org.apache.drill.exec.work.foreman.Foreman.run():250
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4409) projecting literal will result in an empty resultset

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4409:
--
Assignee: Taras Supyk

> projecting literal will result in an empty resultset
> 
>
> Key: DRILL-4409
> URL: https://issues.apache.org/jira/browse/DRILL-4409
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> A query which projects a literal as shown against a Postgres table will 
> result in an empty result set being returned. 
> select 'BB' from postgres.public.tversion



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4395) equi-inner join of two tables in Postgres returns null one of the projected columns

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4395:
--
Assignee: Taras Supyk

> equi-inner join of two tables in Postgres returns null one of the projected 
> columns
> ---
>
> Key: DRILL-4395
> URL: https://issues.apache.org/jira/browse/DRILL-4395
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> This query should return 1,2,3,4 in both columns but returns null in the 
> second column. Both tables are in a Postgres 9.5 server mapped under Drill
> select tint.rnum, tbint.rnum from postgres.public.tint , 
> postgres.public.tbint where tint.cint = tbint.cbint
> create table TINT ( RNUM integer  not null , CINT integer   ) ;
> insert into TINT(RNUM, CINT) values ( 0, NULL);
> insert into TINT(RNUM, CINT) values ( 1, -1);
> insert into TINT(RNUM, CINT) values ( 2, 0);
> insert into TINT(RNUM, CINT) values ( 3, 1);
> insert into TINT(RNUM, CINT) values ( 4, 10);
> create table TBINT ( RNUM integer  not null , CBINT bigint   ) ;
> insert into TBINT(RNUM, CBINT) values ( 0, NULL);
> insert into TBINT(RNUM, CBINT) values ( 1, -1);
> insert into TBINT(RNUM, CBINT) values ( 2, 0);
> insert into TBINT(RNUM, CBINT) values ( 3, 1);
> insert into TBINT(RNUM, CBINT) values ( 4, 10);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4391) browsing metadata via SQLSquirrel shows Postgres indexes, primary and foreign keys as tables

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4391:
--
Assignee: Taras Supyk

> browsing metadata via SQLSquirrel shows Postgres indexes, primary and foreign 
> keys as tables
> 
>
> Key: DRILL-4391
> URL: https://issues.apache.org/jira/browse/DRILL-4391
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata
>Affects Versions: 1.4.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Apache Drill has storage defined to access a Postgres database. 
> A schema in the database has several tables which have indexes, primary keys, 
> foreign keys, or a combination of them all. 
> When SQLSquirrel presents metadata from the Drill JDBC driver, the list of 
> tables includes entries which correspond to the indexes, primary keys, or 
> foreign keys in the schema. The implication is that non-standard JDBC 
> metadata methods are being used to obtain this information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4401) multi-table join projection returning character instead of integer type

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4401:
--
Assignee: Taras Supyk

> multi-table join projection returning character instead of integer type
> ---
>
> Key: DRILL-4401
> URL: https://issues.apache.org/jira/browse/DRILL-4401
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> column 8 should be an integer type but is described by drill as character.
> select tj.tj1rnum, tj.c1 tj1c1, tj.c2 tj1c2, tj2rnum, tj.tj2c1, tj.tj2c2,  
> tjoin3.rnum tj3rnum, tjoin3.c1 tj3c1, tjoin3.c2 tj3c2 from   (select 
> tjoin1.rnum tj1rnum, tjoin1.c1, tjoin2.c2, tjoin2.rnum tj2rnum,tjoin2.c1 
> tj2c1, tjoin2.c2 tj2c2   from postgres.public.tjoin1 left outer join 
> postgres.public.tjoin2 on tjoin1.c1=tjoin2.c1) tj  left outer join 
> postgres.public.tjoin3 on tj.c1=tjoin3.c1  
> create table TJOIN1 (RNUM integer   not null , C1 integer, C2 integer);
> insert into TJOIN1 (RNUM, C1, C2) values ( 0, 10, 15);
> insert into TJOIN1 (RNUM, C1, C2) values ( 1, 20, 25);
> insert into TJOIN1 (RNUM, C1, C2) values ( 2, NULL, 50);
> create table TJOIN2 (RNUM integer   not null , C1 integer, C2 char(2));
> insert into TJOIN2 (RNUM, C1, C2) values ( 0, 10, 'BB');
> insert into TJOIN2 (RNUM, C1, C2) values ( 1, 15, 'DD');
> insert into TJOIN2 (RNUM, C1, C2) values ( 2, NULL, 'EE');
> insert into TJOIN2 (RNUM, C1, C2) values ( 3, 10, 'FF');
> create table TJOIN3 (RNUM integer   not null , C1 integer, C2 char(2));
> insert into TJOIN3 (RNUM, C1, C2) values ( 0, 10, 'XX');
> insert into TJOIN3 (RNUM, C1, C2) values ( 1, 15, 'YY');
> create table TJOIN4 (RNUM integer   not null , C1 integer, C2 char(2));



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4403) AssertionError: Internal error: Conversion to relational algebra failed to preserve datatypes

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4403:
--
Assignee: Taras Supyk

>  AssertionError: Internal error: Conversion to relational algebra failed to 
> preserve datatypes
> --
>
> Key: DRILL-4403
> URL: https://issues.apache.org/jira/browse/DRILL-4403
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> select rnum, c1, c2, c3, stddev_pop( c3 ) over(partition by c1) from 
> postgres.public.tolap
> Error: SYSTEM ERROR: AssertionError: Internal error: Conversion to relational 
> algebra failed to preserve datatypes:
> validated type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, INTEGER EXPR$4) NOT NULL
> converted type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, DOUBLE EXPR$4) NOT NULL
> rel:
> LogicalProject(rnum=[$0], c1=[$1], c2=[$2], c3=[$3], 
> EXPR$4=[POWER(/(CastHigh(-(SUM(*(CastHigh($3), CastHigh($3))) OVER (PARTITION 
> BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), 
> /(*(SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING 
> AND UNBOUNDED FOLLOWING), SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE 
> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)), COUNT(CastHigh($3)) 
> OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED 
> FOLLOWING, COUNT(CastHigh($3)) OVER (PARTITION BY $1 RANGE BETWEEN 
> UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)), 0.5)])
>   LogicalTableScan(table=[[postgres, public, tolap]])
> [Error Id: 61be4aa1-6486-4118-a82b-86c22b551bb5 on centos1:31010]
>   (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
> during fragment initialization: Internal error: Conversion to relational 
> algebra failed to preserve datatypes:
> validated type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, INTEGER EXPR$4) NOT NULL
> converted type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, DOUBLE EXPR$4) NOT NULL
> rel:
> LogicalProject(rnum=[$0], c1=[$1], c2=[$2], c3=[$3], 
> EXPR$4=[POWER(/(CastHigh(-(SUM(*(CastHigh($3), CastHigh($3))) OVER (PARTITION 
> BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), 
> /(*(SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING 
> AND UNBOUNDED FOLLOWING), SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE 
> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)), COUNT(CastHigh($3)) 
> OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED 
> FOLLOWING, COUNT(CastHigh($3)) OVER (PARTITION BY $1 RANGE BETWEEN 
> UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)), 0.5)])
>   LogicalTableScan(table=[[postgres, public, tolap]])
> org.apache.drill.exec.work.foreman.Foreman.run():261
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.AssertionError) Internal error: Conversion to 
> relational algebra failed to preserve datatypes:
> validated type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, INTEGER EXPR$4) NOT NULL
> converted type:
> RecordType(INTEGER NOT NULL rnum, CHAR(3) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c1, CHAR(2) CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary" c2, INTEGER c3, DOUBLE EXPR$4) NOT NULL
> rel:
> LogicalProject(rnum=[$0], c1=[$1], c2=[$2], c3=[$3], 
> EXPR$4=[POWER(/(CastHigh(-(SUM(*(CastHigh($3), CastHigh($3))) OVER (PARTITION 
> BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), 
> /(*(SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING 
> AND UNBOUNDED FOLLOWING), SUM(CastHigh($3)) OVER (PARTITION BY $1 RANGE 
> BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)), COUNT(CastHigh($3)) 
> OVER (PARTITION BY $1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED 
> FOLLOWING, COUNT(CastHigh($3)) OVER (PARTITION 

[jira] [Updated] (DRILL-4407) Group by subquery causes Java NPE

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4407:
--
Assignee: Taras Supyk

> Group by subquery causes Java NPE
> -
>
> Key: DRILL-4407
> URL: https://issues.apache.org/jira/browse/DRILL-4407
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> select count(*) from postgres.public.tjoin2  group by ( select c1 from 
> postgres.public.tjoin1 where rnum = 0)
> Error: VALIDATION ERROR: java.lang.NullPointerException
> [Error Id: d3453085-d77c-484e-8df7-f5fadc7bcc7d on centos1:31010]
>   (org.apache.calcite.tools.ValidationException) 
> java.lang.NullPointerException
> org.apache.calcite.prepare.PlannerImpl.validate():189
> org.apache.calcite.prepare.PlannerImpl.validateAndGetType():198
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode():451
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert():198
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():167
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():199
> org.apache.drill.exec.work.foreman.Foreman.runSQL():924
> org.apache.drill.exec.work.foreman.Foreman.run():250
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.NullPointerException) null
> 
> org.apache.calcite.sql.validate.SqlValidatorUtil$ExpansionAndDeepCopier.visit():633
> 
> org.apache.calcite.sql.validate.SqlValidatorUtil$ExpansionAndDeepCopier.visit():619
> org.apache.calcite.sql.SqlIdentifier.accept():274
> org.apache.calcite.sql.validate.SqlValidatorUtil$DeepCopier.visit():676
> org.apache.calcite.sql.validate.SqlValidatorUtil$DeepCopier.visit():663
> org.apache.calcite.sql.SqlNodeList.accept():152
> 
> org.apache.calcite.sql.util.SqlShuttle$CallCopyingArgHandler.visitChild():134
> 
> org.apache.calcite.sql.util.SqlShuttle$CallCopyingArgHandler.visitChild():101
> org.apache.calcite.sql.SqlOperator.acceptCall():720
> org.apache.calcite.sql.SqlSelectOperator.acceptCall():128
> 
> org.apache.calcite.sql.validate.SqlValidatorUtil$DeepCopier.visitScoped():686
> org.apache.calcite.sql.validate.SqlScopedShuttle.visit():50
> org.apache.calcite.sql.validate.SqlScopedShuttle.visit():32
> org.apache.calcite.sql.SqlCall.accept():130
> org.apache.calcite.sql.validate.SqlValidatorUtil$DeepCopier.visit():676
> org.apache.calcite.sql.validate.SqlValidatorUtil$DeepCopier.visit():663
> org.apache.calcite.sql.SqlNodeList.accept():152
> 
> org.apache.calcite.sql.validate.SqlValidatorUtil$ExpansionAndDeepCopier.copy():626
> org.apache.calcite.sql.validate.AggregatingSelectScope.():92
> org.apache.calcite.sql.validate.SqlValidatorImpl.registerQuery():2200
> org.apache.calcite.sql.validate.SqlValidatorImpl.registerQuery():2122
> 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression():835
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate():551
> org.apache.calcite.prepare.PlannerImpl.validate():187
> org.apache.calcite.prepare.PlannerImpl.validateAndGetType():198
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode():451
> 
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert():198
> org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan():167
> org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():199
> org.apache.drill.exec.work.foreman.Foreman.runSQL():924
> org.apache.drill.exec.work.foreman.Foreman.run():250
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0
> create table TJOIN1 (RNUM integer   not null , C1 integer, C2 integer);
> create table TJOIN2 (RNUM integer   not null , C1 integer, C2 char(2));
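
A hedged workaround, assuming the scalar subquery returns exactly one value: every row of tjoin2 then falls into the same group, so for a non-empty table the query reduces to a plain COUNT(*), which avoids the validator path that throws.

-- equivalent for a non-empty tjoin2: all rows share the single grouping key
select count(*) from postgres.public.tjoin2;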



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4399) query using OVERLAPS function executes and returns 0 rows

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4399:
--
Assignee: Taras Supyk

> query using OVERLAPS function executes and returns 0 rows
> -
>
> Key: DRILL-4399
> URL: https://issues.apache.org/jira/browse/DRILL-4399
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Doc set makes no mention of this, but Drill parses and executes
> select 1 from postgres.public.tdt where (date '1999-12-01' , date 
> '2001-12-31' ) overlaps  ( date '2001-01-01' , tdt.cdt ) and rnum=0
> When executed directly by Postgres, this query returns 1 row
> create table TDT ( RNUM integer  not null , CDT date   ) ;
> comment on table TDT is 'This describes table TDT.';
> grant select on table TDT to public;
> insert into TDT(RNUM, CDT) values ( 0, NULL);
> insert into TDT(RNUM, CDT) values ( 1, DATE '1996-01-01');
> insert into TDT(RNUM, CDT) values ( 2, DATE '2000-01-01');
> insert into TDT(RNUM, CDT) values ( 3, DATE '2000-12-31');
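
A narrower check that keeps only literal date ranges, assuming standard ISO OVERLAPS semantics: the two periods share all of 2001, so one row is expected for rnum = 0.

select 1
from postgres.public.tdt
where (date '1999-12-01', date '2001-12-31') overlaps (date '2001-01-01', date '2002-12-31')
  and rnum = 0;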



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4402) pushing unsupported full outer join to Postgres

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4402:
--
Assignee: Taras Supyk

> pushing unsupported full outer join to Postgres
> ---
>
> Key: DRILL-4402
> URL: https://issues.apache.org/jira/browse/DRILL-4402
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
> SQL query. 
> sql SELECT *
> FROM "public"."tjoin1"
> FULL JOIN "public"."tjoin2" ON "tjoin1"."c1" < "tjoin2"."c1"
> plugin postgres
> Fragment 0:0
> [Error Id: bc54cf76-f4ff-474c-b3df-fa357bdf0ff8 on centos1:31010]
>   (org.postgresql.util.PSQLException) ERROR: FULL JOIN is only supported with 
> merge-joinable or hash-joinable join conditions
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2182
> org.postgresql.core.v3.QueryExecutorImpl.processResults():1911
> org.postgresql.core.v3.QueryExecutorImpl.execute():173
> org.postgresql.jdbc.PgStatement.execute():622
> org.postgresql.jdbc.PgStatement.executeWithFlags():458
> org.postgresql.jdbc.PgStatement.executeQuery():374
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
> org.apache.drill.exec.physical.impl.ScanBatch.():108
> org.apache.drill.exec.physical.impl.ScanBatch.():136
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
> org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0
> create table TJOIN1 (RNUM integer   not null , C1 integer, C2 integer);
> insert into TJOIN1 (RNUM, C1, C2) values ( 0, 10, 15);
> insert into TJOIN1 (RNUM, C1, C2) values ( 1, 20, 25);
> insert into TJOIN1 (RNUM, C1, C2) values ( 2, NULL, 50);
> create table TJOIN2 (RNUM integer   not null , C1 integer, C2 char(2));
> insert into TJOIN2 (RNUM, C1, C2) values ( 0, 10, 'BB');
> insert into TJOIN2 (RNUM, C1, C2) values ( 1, 15, 'DD');
> insert into TJOIN2 (RNUM, C1, C2) values ( 2, NULL, 'EE');
> insert into TJOIN2 (RNUM, C1, C2) values ( 3, 10, 'FF');
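
One possible manual rewrite that Postgres itself accepts, sketched as a workaround (the FULL JOIN on an inequality expressed as a LEFT JOIN plus the unmatched right-side rows); whether Drill pushes each branch down in one piece is not guaranteed.

select t1.rnum as rnum1, t1.c1 as c1_1, t1.c2 as c2_1,
       t2.rnum as rnum2, t2.c1 as c1_2, t2.c2 as c2_2
from postgres.public.tjoin1 t1
left join postgres.public.tjoin2 t2 on t1.c1 < t2.c1
union all
-- right-side rows with no matching left row
select t1.rnum, t1.c1, t1.c2, t2.rnum, t2.c1, t2.c2
from postgres.public.tjoin2 t2
left join postgres.public.tjoin1 t1 on t1.c1 < t2.c1
where t1.rnum is null;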



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4397) Add support for NULL treatment in LAG/LEAD functions

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4397:
--
Assignee: Taras Supyk

> Add support for NULL treatment in LAG/LEAD functions
> 
>
> Key: DRILL-4397
> URL: https://issues.apache.org/jira/browse/DRILL-4397
> Project: Apache Drill
>  Issue Type: Wish
>  Components: Execution - Relational Operators
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> Add support for how LAG/LEAD should treat NULLs (RESPECT NULLS / IGNORE NULLS) per the ISO-SQL specification
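
The ISO-SQL null-treatment clause in question, sketched against the tjoin1 table used in the related reports; Drill does not currently accept this syntax.

select rnum,
       lead(c1) ignore nulls  over (order by rnum) as next_non_null_c1,
       lag(c1)  respect nulls over (order by rnum) as prev_c1
from postgres.public.tjoin1;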



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4408) re-written query projecting an aggregate on a boolean not supported by Postgres

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4408:
--
Assignee: Taras Supyk

> re-written query projecting an aggregate on a boolean not supported by 
> Postgres
> ---
>
> Key: DRILL-4408
> URL: https://issues.apache.org/jira/browse/DRILL-4408
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> select rnum, c1, c2 from postgres.public.tset1 as t1 where exists ( select c1 
> from postgres.public.tset2 where c1 = t1.c1 )
> Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
> SQL query. 
> sql SELECT *
> FROM "public"."tset1"
> INNER JOIN (SELECT "c10", MIN("$f0") AS "$f1"
> FROM (SELECT "t0"."c1" AS "c10", TRUE AS "$f0"
> FROM "public"."tset2"
> INNER JOIN (SELECT "c1"
> FROM (SELECT "c1"
> FROM "public"."tset1") AS "t"
> GROUP BY "c1") AS "t0" ON "tset2"."c1" = "t0"."c1") AS "t1"
> GROUP BY "c10") AS "t2" ON "tset1"."c1" = "t2"."c10"
> plugin postgres
> Fragment 0:0
> [Error Id: a00cd446-f168-463c-b2b9-bb3d6b43e729 on centos1:31010]
>   (org.postgresql.util.PSQLException) ERROR: function min(boolean) does not 
> exist
>   Hint: No function matches the given name and argument types. You might need 
> to add explicit type casts.
>   Position: 58
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2182
> org.postgresql.core.v3.QueryExecutorImpl.processResults():1911
> org.postgresql.core.v3.QueryExecutorImpl.execute():173
> org.postgresql.jdbc.PgStatement.execute():622
> org.postgresql.jdbc.PgStatement.executeWithFlags():458
> org.postgresql.jdbc.PgStatement.executeQuery():374
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
> org.apache.drill.exec.physical.impl.ScanBatch.():108
> org.apache.drill.exec.physical.impl.ScanBatch.():136
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
> org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0
> create table TSET1 (RNUM integer   not null , C1 integer, C2 char(3));
> create table TSET2 (RNUM integer   not null , C1 integer, C2 char(3));
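
For comparison, Postgres evaluates the untransformed EXISTS form directly; the failure comes only from the MIN-over-boolean shape produced by the rewrite.

SELECT "rnum", "c1", "c2"
FROM "public"."tset1" AS "t1"
WHERE EXISTS (SELECT 1 FROM "public"."tset2" WHERE "c1" = "t1"."c1");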



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4396) Generates invalid cast specification in re-written query to Postgres

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4396:
--
Assignee: Taras Supyk

> Generates invalid cast specification in re-written query to Postgres
> 
>
> Key: DRILL-4396
> URL: https://issues.apache.org/jira/browse/DRILL-4396
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> select vint.rnum, tflt.rnum from postgres.public.vint , postgres.public.tflt 
> where vint.cint = tflt.cflt
> Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
> SQL query. 
> sql SELECT *
> FROM (SELECT "rnum", CAST("cint" AS DOUBLE) AS "$f2"
> FROM "public"."vint") AS "t"
> INNER JOIN "public"."tflt" ON "t"."$f2" = "tflt"."cflt"
> plugin postgres
> Fragment 0:0
> [Error Id: 9985ca6b-1faf-43e0-9465-b7a6e8876c6d on centos1:31010]
>   (org.postgresql.util.PSQLException) ERROR: type "double" does not exist
>   Position: 46
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2182
> org.postgresql.core.v3.QueryExecutorImpl.processResults():1911
> org.postgresql.core.v3.QueryExecutorImpl.execute():173
> org.postgresql.jdbc.PgStatement.execute():622
> org.postgresql.jdbc.PgStatement.executeWithFlags():458
> org.postgresql.jdbc.PgStatement.executeQuery():374
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
> org.apache.drill.exec.physical.impl.ScanBatch.():108
> org.apache.drill.exec.physical.impl.ScanBatch.():136
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
> org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0
> create table TINT ( RNUM integer  not null , CINT integer   ) ;
> create view VINT as select * from TINT;
> create table TFLT ( RNUM integer  not null , CFLT float   ) ;
> create view VFLT as select * from TFLT;
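
For reference, the type name Postgres accepts for this cast is DOUBLE PRECISION (or FLOAT8) rather than DOUBLE:

SELECT "rnum", CAST("cint" AS DOUBLE PRECISION) AS "$f2"
FROM "public"."vint";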



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4405) invalid Postgres SQL generated for CONCAT (literal, literal)

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4405:
--
Assignee: Taras Supyk

> invalid Postgres SQL generated for CONCAT (literal, literal) 
> -
>
> Key: DRILL-4405
> URL: https://issues.apache.org/jira/browse/DRILL-4405
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
>
> select concat( 'FF' , 'FF' )  from postgres.public.tversion
> Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
> SQL query. 
> sql SELECT CAST('' AS ANY) AS "EXPR$0"
> FROM "public"."tversion"
> plugin postgres
> Fragment 0:0
> [Error Id: c3f24106-8d75-4a57-a638-ac5f0aca0769 on centos1:31010]
>   (org.postgresql.util.PSQLException) ERROR: syntax error at or near "ANY"
>   Position: 23
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2182
> org.postgresql.core.v3.QueryExecutorImpl.processResults():1911
> org.postgresql.core.v3.QueryExecutorImpl.execute():173
> org.postgresql.jdbc.PgStatement.execute():622
> org.postgresql.jdbc.PgStatement.executeWithFlags():458
> org.postgresql.jdbc.PgStatement.executeQuery():374
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
> org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
> org.apache.drill.exec.physical.impl.ScanBatch.():108
> org.apache.drill.exec.physical.impl.ScanBatch.():136
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
> org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
> org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
> SQLState:  null
> ErrorCode: 0
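
For reference, either of the forms below is valid in Postgres for the same expression; the failure is caused by the CAST('' AS ANY) placeholder in the generated SQL, not by concatenation itself.

SELECT 'FF' || 'FF' AS "EXPR$0"
FROM "public"."tversion";
-- or, with the literals folded at plan time:
SELECT 'FFFF' AS "EXPR$0"
FROM "public"."tversion";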



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-4406) extract () Error: SYSTEM ERROR: ClassCastException. Caused By (java.lang.ClassCastException)

2016-02-25 Thread Jacques Nadeau (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacques Nadeau updated DRILL-4406:
--
Assignee: Taras Supyk

> extract () Error: SYSTEM ERROR: ClassCastException. Caused By 
> (java.lang.ClassCastException) 
> -
>
> Key: DRILL-4406
> URL: https://issues.apache.org/jira/browse/DRILL-4406
> Project: Apache Drill
>  Issue Type: Bug
>  Components:  Server
>Affects Versions: 1.5.0
>Reporter: N Campbell
>Assignee: Taras Supyk
> Fix For: 1.5.0
>
>
> Trying to extract() from a Postgres timestamp column fails
> create table TTS ( RNUM integer  not null , CTS timestamp(3) ) ;
> Error: SYSTEM ERROR: ClassCastException
> [Error Id: 4a6a1f6e-1caa-42c4-b44c-8db62146 on centos1:31010]
>   (org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception 
> during fragment initialization: null
> org.apache.drill.exec.work.foreman.Foreman.run():261
> java.util.concurrent.ThreadPoolExecutor.runWorker():1142
> java.util.concurrent.ThreadPoolExecutor$Worker.run():617
> java.lang.Thread.run():745
>   Caused By (java.lang.ClassCastException) null
> SQLState:  null
> ErrorCode: 0
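
An illustrative query of the kind described; the report does not include the exact statement, so the extracted field here is an assumption.

select rnum, extract(year from cts) as cts_year
from postgres.public.tts;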



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4425) Handle blank column names in Hbase in CONVERT_FROM

2016-02-25 Thread Jacques Nadeau (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167482#comment-15167482
 ] 

Jacques Nadeau commented on DRILL-4425:
---

Maybe a sentinel value makes sense here?

> Handle blank column names in Hbase in CONVERT_FROM
> --
>
> Key: DRILL-4425
> URL: https://issues.apache.org/jira/browse/DRILL-4425
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - HBase
>Affects Versions: 1.3.0
> Environment: Apache Drill 1.3 on HortonWorks HDP VM 2.1
>Reporter: Saurabh Nigam
>  Labels: easyfix
>
> HBase tables may contain blank column names and blank column family names. Drill 
> needs to handle such a situation.
> I faced the issue when I had a column with a blank column name in my HBase 
> table. To reproduce it:
> - Create a column without any name in HBase
> - Try to access it via the Drill console
> - Try to use the CONVERT_FROM function to convert that data from Base64 encoding 
> to make it readable. You won't be able to convert the blank column because you 
> cannot use a blank identifier after the dot in your query.
> Something like this:
> SELECT CONVERT_FROM(students. , 'UTF8') AS zipcode 
>  FROM students;
> where the column name is blank.
> We need to provide a placeholder for blank column names.
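
A hypothetical sketch of the sentinel idea, assuming a placeholder identifier such as `_EMPTY_` were exposed wherever the blank name appears; the name is illustrative only, not an existing Drill feature.

SELECT CONVERT_FROM(students.`_EMPTY_`, 'UTF8') AS zipcode
FROM students;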



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)