[jira] [Commented] (DRILL-4249) DROP TABLE HANGS indefinitely

2016-01-29 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123344#comment-15123344
 ] 

Khurram Faraaz commented on DRILL-4249:
---

And this is from the drillbit.out file; it looks to be related to the MapR FS client.

{noformat}
2016-01-29 09:50:19,6724 ERROR Cidcache 
fs/client/fileclient/cc/cidcache.cc:1586 Thread: 8396 MoveToNextCldb: No CLDB 
entries, cannot run, sleeping 5 seconds!
2016-01-29 09:50:28,6736 ERROR Cidcache 
fs/client/fileclient/cc/cidcache.cc:1586 Thread: 8396 MoveToNextCldb: No CLDB 
entries, cannot run, sleeping 5 seconds!
2016-01-29 09:54:39
Full thread dump OpenJDK 64-Bit Server VM (25.65-b01 mixed mode):

"2954c936-769a-def3-e59b-70a29ace5af1:foreman" #95 daemon prio=10 os_prio=0 
tid=0x7fd3cc40 nid=0x1bd1 runnable [0x7fd3a1521000]
   java.lang.Thread.State: RUNNABLE
at com.mapr.fs.jni.MapRClient.readdirplus(Native Method)
at com.mapr.fs.MapRClientImpl.listStatus(MapRClientImpl.java:353)
at com.mapr.fs.MapRFileSystem.listMapRStatus(MapRFileSystem.java:1403)
at com.mapr.fs.MapRFileSystem.listStatus(MapRFileSystem.java:1436)
at com.mapr.fs.MapRFileSystem.listStatus(MapRFileSystem.java:78)
at org.apache.hadoop.fs.Globber.listStatus(Globber.java:69)
at org.apache.hadoop.fs.Globber.glob(Globber.java:218)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1710)
at com.mapr.fs.MapRFileSystem.globStatus(MapRFileSystem.java:1270)
at 
org.apache.drill.exec.store.dfs.DrillFileSystem.addRecursiveStatus(DrillFileSystem.java:767)
at 
org.apache.drill.exec.store.dfs.DrillFileSystem.list(DrillFileSystem.java:754)
at 
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.isHomogeneous(WorkspaceSchemaFactory.java:664)
at 
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.dropTable(WorkspaceSchemaFactory.java:692)
at 
org.apache.drill.exec.planner.sql.handlers.DropTableHandler.getPlan(DropTableHandler.java:72)
at 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:218)
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:909)
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"qtp801808302-94" #94 prio=5 os_prio=0 tid=0x7fd3c8e58000 nid=0x15cb 
waiting on condition [0x7fd3a2824000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0007a42065d0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:513)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:48)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:569)
at java.lang.Thread.run(Thread.java:745)

"BitServer-4" #23 daemon prio=10 os_prio=0 tid=0x7fd3c89c9000 nid=0x3ac8 
runnable [0x7fd3a1221000]
   java.lang.Thread.State: RUNNABLE
at io.netty.channel.epoll.Native.epollWait0(Native Method)
at io.netty.channel.epoll.Native.epollWait(Native.java:148)
at 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:180)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:205)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)

"BitServer-3" #22 daemon prio=10 os_prio=0 tid=0x7fd412f42800 nid=0x3ac7 
runnable [0x7fd3a1322000]
   java.lang.Thread.State: RUNNABLE
at io.netty.channel.epoll.Native.epollWait0(Native Method)
at io.netty.channel.epoll.Native.epollWait(Native.java:148)
at 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:180)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:205)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)

"Curator-PathChildrenCache-2" #86 daemon prio=5 os_prio=0 
tid=0x7fd3cc578000 nid=0x20ca waiting on condition [0x7fd3a662a000]
   java.lang.Thread.State: WAITING (parking)
at 

[jira] [Commented] (DRILL-4249) DROP TABLE HANGS indefinitely

2016-01-29 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123331#comment-15123331
 ] 

Khurram Faraaz commented on DRILL-4249:
---

Yes, the DROP TABLE hang is reproducible on my 4-node cluster, with MapR Drill 
1.4.0 GA, MapR FS 5.0.0 GA and JDK 8.

kill -QUIT 5194 did not output the stack trace of the hung process to 
standard output.
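As an aside, when the SIGQUIT output is inconvenient to capture, an equivalent dump can be produced from inside the JVM through the standard Thread API. A minimal illustrative sketch (not Drill code; class and method names are invented):

```java
import java.util.Map;

// Illustrative sketch (not Drill code): capture a thread dump from inside the
// JVM via the standard Thread API. The content is equivalent to what SIGQUIT
// (kill -QUIT / Ctrl+\) writes to the JVM's stdout.
public class ThreadDump {

    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            // Header line roughly mirroring the HotSpot dump format.
            sb.append('"').append(t.getName()).append("\" ")
              .append(t.isDaemon() ? "daemon " : "")
              .append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("\tat ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

This prints one header plus frames per live thread, which is often easier to route to a file than SIGQUIT output.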

Pressing Ctrl+\ instead gave me the stack trace at the sqlline prompt for the 
hung DROP TABLE process:

{noformat}
0: jdbc:drill:schema=dfs.tmp> drop table tblMD332_wp;
2016-01-29 10:31:26
Full thread dump OpenJDK 64-Bit Server VM (25.65-b01 mixed mode):

"threadDeathWatcher-2-1" #29 daemon prio=1 os_prio=0 tid=0x7ffe40254000 
nid=0x1b51 waiting on condition [0x7ffe305db000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:137)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)

"Client-1" #27 daemon prio=10 os_prio=0 tid=0x7ffe90f01000 nid=0x1b3a 
runnable [0x7ffe30c8c000]
   java.lang.Thread.State: RUNNABLE
at io.netty.channel.epoll.Native.epollWait0(Native Method)
at io.netty.channel.epoll.Native.epollWait(Native.java:148)
at 
io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:180)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:205)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)

"Curator-ServiceCache-0" #26 daemon prio=5 os_prio=0 tid=0x7ffe90e2d000 
nid=0x1ae0 waiting on condition [0x7ffe30ae6000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0006c0033898> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"Curator-Framework-0" #25 daemon prio=5 os_prio=0 tid=0x7ffe90dde800 
nid=0x1adf waiting on condition [0x7ffe30d8d000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0006c003b878> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.DelayQueue.take(DelayQueue.java:211)
at java.util.concurrent.DelayQueue.take(DelayQueue.java:70)
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:780)
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:62)
at 
org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:257)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"main-EventThread" #24 daemon prio=5 os_prio=0 tid=0x7ffe90dc3000 
nid=0x1ade waiting on condition [0x7ffe30f33000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0006c00436a0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:491)

"main-SendThread(centos-01.qa.lab:5181)" #23 daemon prio=5 os_prio=0 
tid=0x7ffe90dcf000 nid=0x1add runnable [0x7ffe4410c000]
   java.lang.Thread.State: RUNNABLE
at 

[jira] [Commented] (DRILL-4255) SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION

2016-01-29 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123630#comment-15123630
 ] 

Zelaine Fong commented on DRILL-4255:
-

In the case where you have non-empty files, but the schema of the data changes 
in the JSON file, are you expecting Drill to NOT return an error?
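For concreteness, a hypothetical pair of records illustrating the kind of schema change involved (field names and values are invented, not taken from the actual audit logs): the same field changes type between records, or appears in only some of them.

```json
{"operation": "LOOKUP", "uid": 1001}
{"operation": 42}
```

Drill's hash aggregate does not currently handle such a shift, which is what produces the UNSUPPORTED_OPERATION error quoted below.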

> SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION
> --
>
> Key: DRILL-4255
> URL: https://issues.apache.org/jira/browse/DRILL-4255
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.4.0
> Environment: CentOS
>Reporter: Khurram Faraaz
>
> SELECT DISTINCT over mapr fs generated audit logs (JSON files) results in 
> unsupported operation. An exact query over another set of JSON data returns 
> correct results.
> MapR Drill 1.4.0, commit ID : 9627a80f
> MapRBuildVersion : 5.1.0.36488.GA
> OS : CentOS x86_64 GNU/Linux
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct t.operation from `auditlogs` t;
> Error: UNSUPPORTED_OPERATION ERROR: Hash aggregate does not support schema 
> changes
> Fragment 3:3
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 on example.com:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2016-01-08 11:35:35,093 [297060f9-1c7a-b32c-09e8-24b5ad863e73:frag:3:3] INFO  
> o.a.d.e.p.i.aggregate.HashAggBatch - User Error Occurred
> org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION 
> ERROR: Hash aggregate does not support schema changes
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534)
>  ~[drill-common-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:144)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:93)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.7.0_65]
> at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_65]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>  [hadoop-common-2.7.0-mapr-1506.jar:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.4.0.jar:1.4.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
> {noformat}
> Query plan for above query.
> {noformat}
> 00-00Screen : rowType = RecordType(ANY operation): rowcount = 141437.16, 
> cumulative cost = {3.4100499276E7 rows, 1.69455861396E8 cpu, 0.0 io, 
> 1.2165858754560001E10 network, 2.738223417605E8 memory}, id = 7572
> 00-01  UnionExchange : rowType = RecordType(ANY operation): rowcount = 
> 141437.16, cumulative cost = {3.408635556E7 rows, 1.6944171768E8 cpu, 0.0 io, 
> 1.2165858754560001E10 network, 2.738223417605E8 memory}, 

[jira] [Commented] (DRILL-3944) Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"

2016-01-29 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123653#comment-15123653
 ] 

Arina Ielchiieva commented on DRILL-3944:
-

[Jason 
Altekruse|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=jaltekruse],
 you are right, there is no concept in SQL of banning a scalar function 
from the select list. maxdir looks like a *scalar function* (meaning, it takes 
parameters and returns one value per row), so it doesn't seem right to ban it 
from there. 
Taking into account that it's not allowed anywhere else except the from clause, 
in a way it behaves like a *table-valued function*.

Currently the function syntax is the following: 
{code}
select * from vspace.wspace.`freemat2` where dir0 = maxdir('vspace.wspace', 
'freemat2');
{code}
The table name appears twice in the query. Is there a case where the maxdir IN 
parameters would differ from those in the from clause? 
If not, a table-valued design could look like:
{code}
select * from maxdir('vspace.wspace', 'freemat2', 'dir0');
{code}

_I am not suggesting changing the current design; it's just an observation._

> Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"
> --
>
> Key: DRILL-3944
> URL: https://issues.apache.org/jira/browse/DRILL-3944
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: 1.2.0
>Reporter: Jitendra
>Assignee: Arina Ielchiieva
> Attachments: newStackTrace.txt
>
>
> We are facing issue with MAXDIR function, below is the query we are using to 
> reproduce this issue.
> 0: jdbc:drill:drillbit=localhost> select maxdir('vspace.wspace', 'freemat2') 
> from vspace.wspace.`freemat2`;
> Error: SYSTEM ERROR: CompileException: Line 75, Column 70: Unknown variable 
> or type "FILE_SEPARATOR"
> Fragment 0:0
> [Error Id: d17c6e48-554d-4934-bc4d-783ca3dc6f51 on 10.10.99.71:31010] 
> (state=,code=0);
> Below are the drillbit logs.
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: RUNNING
> 2015-10-09 21:26:22,038 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested RUNNING --> 
> FINISHED
> 2015-10-09 21:26:22,039 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: FINISHED
> 2015-10-09 21:29:59,281 [29e7ce27-9cad-9d8a-a482-39f54cc7deda:foreman] INFO 
> o.a.d.e.store.mock.MockStorageEngine - Failure while attempting to check for 
> Parquet metadata file.
> java.io.IOException: Open failed for file: /vspace/wspace/freemat2/20151005, 
> error: Invalid argument (22)
> at com.mapr.fs.MapRClientImpl.open(MapRClientImpl.java:212) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at com.mapr.fs.MapRFileSystem.open(MapRFileSystem.java:862) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800) 
> ~[hadoop-common-2.5.1-mapr-1503.jar:na]
> at 
> org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:132)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher$MagicStringMatcher.matches(BasicFormatMatcher.java:142)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher.isFileReadable(BasicFormatMatcher.java:112)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isDirReadable(ParquetFormatPlugin.java:256)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isReadable(ParquetFormatPlugin.java:210)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:326)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:153)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.getNewEntry(ExpandingConcurrentMap.java:96)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.get(ExpandingConcurrentMap.java:90)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.getTable(WorkspaceSchemaFactory.java:276)
>  

[jira] [Commented] (DRILL-4323) Hive Native Reader : A simple count(*) throws Incoming batch has an empty schema error

2016-01-29 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123692#comment-15123692
 ] 

Zelaine Fong commented on DRILL-4323:
-

Latest comment from [~jni] (copied and pasted from an email thread on the dev 
mailing list):

Venki and I did some investigation for DRILL-4323. The issue reported in 
DRILL-4323 seems to happen on the 1.4.0 release as well. It seems to us that 
this is not a regression from 1.4.0; it is probably a regression from 1.3.0.

DRILL-4083 makes the planner use DrillHiveNativeReader instead of HiveReader 
for the "select count(*) from hive_table" query. However, the Project after 
the scan produces an empty schema. Before DRILL-4083, Drill used HiveScan, 
which works fine.

> Hive Native Reader : A simple count(*) throws Incoming batch has an empty 
> schema error
> --
>
> Key: DRILL-4323
> URL: https://issues.apache.org/jira/browse/DRILL-4323
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.5.0
>Reporter: Rahul Challapalli
>Assignee: Sean Hsuan-Yi Chu
>Priority: Critical
> Attachments: error.log
>
>
> git.commit.id.abbrev=3d0b4b0
> A simple count(*) query does not work when hive native reader is enabled
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> +-+
> | EXPR$0  |
> +-+
> | 10  |
> +-+
> 1 row selected (3.074 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `store.hive.optimize_scan_with_native_readers` = true;
> +---++
> |  ok   |summary |
> +---++
> | true  | store.hive.optimize_scan_with_native_readers updated.  |
> +---++
> 1 row selected (0.2 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> Error: SYSTEM ERROR: IllegalStateException: Incoming batch [#1341, 
> ProjectRecordBatch] has an empty schema. This is not allowed.
> Fragment 0:0
> [Error Id: 4c867440-0fd3-4eda-922f-0f5eadcb1463 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> {code}
> Hive DDL for the table :
> {code}
> create table customer
> (
> c_customer_sk int,
> c_customer_id string,
> c_current_cdemo_sk int,
> c_current_hdemo_sk int,
> c_current_addr_sk int,
> c_first_shipto_date_sk int,
> c_first_sales_date_sk int,
> c_salutation string,
> c_first_name string,
> c_last_name string,
> c_preferred_cust_flag string,
> c_birth_day int,
> c_birth_month int,
> c_birth_year int,
> c_birth_country string,
> c_login string,
> c_email_address string,
> c_last_review_date string
> )
> STORED AS PARQUET
> LOCATION '/drill/testdata/customer'
> {code}
> Attached the log file with the stacktrace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-3944) Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"

2016-01-29 Thread Jinfeng Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123639#comment-15123639
 ] 

Jinfeng Ni commented on DRILL-3944:
---

I think maxdir() is special in that it should be evaluated by the constant 
reduction rule at planning time. If for some reason the constant reduction 
rule does not fire successfully and maxdir() is passed on to execution, we 
should block it before it reaches execution. UnsupportedOperatorsVisitor, or 
a similar place, is probably the right spot to check and raise an exception.

This is similar to the flatten() function, which can only appear in certain 
SQL clauses. maxdir() should never appear in any SQL clause after Drill 
finishes logical planning; if it does, raise an exception.
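A rough sketch of such a guard, heavily hedged: the class and method names below are hypothetical, not the actual UnsupportedOperatorsVisitor code; only the directory-function names (maxdir, mindir, imaxdir, imindir) come from Drill itself.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the guard described above -- NOT the actual
// UnsupportedOperatorsVisitor code. If a call to a planning-time-only
// directory function survives logical planning, fail fast with a clear
// message instead of letting it reach execution, where it currently dies
// with an obscure CompileException.
public class PlanTimeFunctionGuard {

    // Drill's directory functions; the set-membership check is the
    // illustrative part, the class and method names are invented.
    private static final Set<String> PLAN_TIME_ONLY = new HashSet<>(
        Arrays.asList("maxdir", "mindir", "imaxdir", "imindir"));

    public static void checkFunction(String functionName) {
        if (PLAN_TIME_ONLY.contains(functionName.toLowerCase())) {
            throw new UnsupportedOperationException(
                functionName + "() must be evaluated during planning and "
                + "cannot be passed to execution");
        }
    }
}
```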






[jira] [Created] (DRILL-4327) Fix rawtypes warning emitted by compiler

2016-01-29 Thread Laurent Goujon (JIRA)
Laurent Goujon created DRILL-4327:
-

 Summary: Fix rawtypes warning emitted by compiler
 Key: DRILL-4327
 URL: https://issues.apache.org/jira/browse/DRILL-4327
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Laurent Goujon
Assignee: Laurent Goujon
Priority: Minor


The Drill codebase references lots of raw types, which generates many warnings 
from the compiler.

Since Drill is now compiled with Java 1.7, it should use generic types as much 
as possible.
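As a generic illustration of the kind of change involved (not taken from the Drill codebase), replacing a raw type with a parameterized one removes the warning and moves the type check to compile time:

```java
import java.util.ArrayList;
import java.util.List;

// Generic illustration (not from the Drill codebase) of the change DRILL-4327
// calls for: replacing raw types with parameterized ones.
public class RawTypesExample {

    // Before: raw List. javac flags this with rawtypes/unchecked warnings,
    // and the cast is only checked at runtime.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static String firstRaw(List items) {
        items.add("hello");
        return (String) items.get(0);
    }

    // After: parameterized List<String>. No warnings, no cast; the element
    // type is verified at compile time.
    static String firstGeneric(List<String> items) {
        items.add("hello");
        return items.get(0);
    }

    public static void main(String[] args) {
        System.out.println(firstRaw(new ArrayList<Object>()));
        System.out.println(firstGeneric(new ArrayList<String>()));
    }
}
```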





[jira] [Updated] (DRILL-4326) JDBC Storage Plugin for PostgreSQL does not work

2016-01-29 Thread Akon Dey (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akon Dey updated DRILL-4326:

Description: 
Queries with the JDBC Storage Plugin for PostgreSQL fail with DATA_READ ERROR.

The JDBC Storage Plugin settings in use are:
{code}
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://127.0.0.1/test",
  "username": "akon",
  "password": null,
  "enabled": false
}
{code}

Please refer to the following stack for further details:

{noformat}
Akons-MacBook-Pro:drill akon$ 
./distribution/target/apache-drill-1.5.0-SNAPSHOT/apache-drill-1.5.0-SNAPSHOT/bin/drill-embedded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; 
support was removed in 8.0
Jan 29, 2016 9:17:18 AM org.glassfish.jersey.server.ApplicationHandler 
initialize
INFO: Initiating Jersey application, version Jersey: 2.8 2014-04-29 01:25:26...
apache drill 1.5.0-SNAPSHOT
"a little sql for your nosql"
0: jdbc:drill:zk=local> !verbose
verbose: on
0: jdbc:drill:zk=local> use pgdb;
+---+---+
|  ok   |  summary  |
+---+---+
| true  | Default schema changed to [pgdb]  |
+---+---+
1 row selected (0.753 seconds)
0: jdbc:drill:zk=local> select * from ips;
Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010] 
(state=,code=0)
java.sql.SQLException: DATA_READ ERROR: The JDBC storage plugin failed while 
trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:290)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1923)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:73)
at 
net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
at 
net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
at 
net.hydromatic.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:338)
at 
net.hydromatic.avatica.AvaticaStatement.execute(AvaticaStatement.java:69)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.execute(DrillStatementImpl.java:101)
at sqlline.Commands.execute(Commands.java:841)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:746)
at sqlline.SqlLine.begin(SqlLine.java:621)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: DATA_READ 
ERROR: The JDBC storage plugin failed while trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
  

[jira] [Updated] (DRILL-4326) JDBC Storage Plugin for PostgreSQL does not work

2016-01-29 Thread Akon Dey (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akon Dey updated DRILL-4326:

Description: 
Queries with the JDBC Storage Plugin for PostgreSQL fail with DATA READ ERROR.

The JDBC Storage Plugin settings in use are:
{code}
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://127.0.0.1/test",
  "username": "akon",
  "password": null,
  "enabled": false
}
{code}

Please refer to the following stack for further details:

{noformat}
Akons-MacBook-Pro:drill akon$ 
./distribution/target/apache-drill-1.5.0-SNAPSHOT/apache-drill-1.5.0-SNAPSHOT/bin/drill-embedded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; 
support was removed in 8.0
Jan 29, 2016 9:17:18 AM org.glassfish.jersey.server.ApplicationHandler 
initialize
INFO: Initiating Jersey application, version Jersey: 2.8 2014-04-29 01:25:26...
apache drill 1.5.0-SNAPSHOT
"a little sql for your nosql"
0: jdbc:drill:zk=local> !verbose
verbose: on
0: jdbc:drill:zk=local> use pgdb;
+---+---+
|  ok   |  summary  |
+---+---+
| true  | Default schema changed to [pgdb]  |
+---+---+
1 row selected (0.753 seconds)
0: jdbc:drill:zk=local> select * from ips;
Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010] 
(state=,code=0)
java.sql.SQLException: DATA_READ ERROR: The JDBC storage plugin failed while 
trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:290)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1923)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:73)
at 
net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
at 
net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
at 
net.hydromatic.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:338)
at 
net.hydromatic.avatica.AvaticaStatement.execute(AvaticaStatement.java:69)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.execute(DrillStatementImpl.java:101)
at sqlline.Commands.execute(Commands.java:841)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:746)
at sqlline.SqlLine.begin(SqlLine.java:621)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: DATA_READ 
ERROR: The JDBC storage plugin failed while trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
  

[jira] [Commented] (DRILL-4255) SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION

2016-01-29 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123834#comment-15123834
 ] 

Khurram Faraaz commented on DRILL-4255:
---

In the case where we have non-empty files but the schema of the data changes 
within the JSON file, Drill should definitely report a schema change error.

However, if the system option `store.json.all_text_mode` is set to true, I 
assume that Drill will not return an error, because all data is treated as 
strings when that option is enabled.
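
For reference, the option can be toggled per session and the query re-run (the transcript below is illustrative; the option name is as discussed above and the `auditlogs` path is the one from this issue's repro):

{noformat}
0: jdbc:drill:schema=dfs.tmp> ALTER SESSION SET `store.json.all_text_mode` = true;
0: jdbc:drill:schema=dfs.tmp> select distinct t.operation from `auditlogs` t;
{noformat}

With all-text mode on, JSON scalar values are read as VARCHAR, so the hash aggregate should not observe a schema change across files.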

> SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION
> --
>
> Key: DRILL-4255
> URL: https://issues.apache.org/jira/browse/DRILL-4255
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.4.0
> Environment: CentOS
>Reporter: Khurram Faraaz
>
> SELECT DISTINCT over MapR-FS-generated audit logs (JSON files) results in an 
> unsupported-operation error. An identical query over another set of JSON data 
> returns correct results.
> MapR Drill 1.4.0, commit ID : 9627a80f
> MapRBuildVersion : 5.1.0.36488.GA
> OS : CentOS x86_64 GNU/Linux
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct t.operation from `auditlogs` t;
> Error: UNSUPPORTED_OPERATION ERROR: Hash aggregate does not support schema 
> changes
> Fragment 3:3
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 on example.com:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2016-01-08 11:35:35,093 [297060f9-1c7a-b32c-09e8-24b5ad863e73:frag:3:3] INFO  
> o.a.d.e.p.i.aggregate.HashAggBatch - User Error Occurred
> org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION 
> ERROR: Hash aggregate does not support schema changes
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534)
>  ~[drill-common-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:144)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:93)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.7.0_65]
> at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_65]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>  [hadoop-common-2.7.0-mapr-1506.jar:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.4.0.jar:1.4.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
> {noformat}
> Query plan for above query.
> {noformat}
> 00-00Screen : rowType = RecordType(ANY operation): rowcount = 141437.16, 
> cumulative cost = {3.4100499276E7 rows, 1.69455861396E8 cpu, 0.0 io, 
> 1.2165858754560001E10 network, 2.738223417605E8 memory}, id = 7572
> 00-01  

[jira] [Commented] (DRILL-4255) SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION

2016-01-29 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123841#comment-15123841
 ] 

Zelaine Fong commented on DRILL-4255:
-

Does the problem reproduce even when the option `store.json.all_text_mode` is 
set to true?  Your repro/description doesn't explicitly mention that.

> SELECT DISTINCT query over JSON data returns UNSUPPORTED OPERATION
> --
>
> Key: DRILL-4255
> URL: https://issues.apache.org/jira/browse/DRILL-4255
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.4.0
> Environment: CentOS
>Reporter: Khurram Faraaz
>
> SELECT DISTINCT over MapR-FS-generated audit logs (JSON files) results in an 
> unsupported-operation error. An identical query over another set of JSON data 
> returns correct results.
> MapR Drill 1.4.0, commit ID : 9627a80f
> MapRBuildVersion : 5.1.0.36488.GA
> OS : CentOS x86_64 GNU/Linux
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select distinct t.operation from `auditlogs` t;
> Error: UNSUPPORTED_OPERATION ERROR: Hash aggregate does not support schema 
> changes
> Fragment 3:3
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 on example.com:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2016-01-08 11:35:35,093 [297060f9-1c7a-b32c-09e8-24b5ad863e73:frag:3:3] INFO  
> o.a.d.e.p.i.aggregate.HashAggBatch - User Error Occurred
> org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION 
> ERROR: Hash aggregate does not support schema changes
> [Error Id: 1233bf68-13da-4043-a162-cf6d98c07ec9 ]
> at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534)
>  ~[drill-common-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:144)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:93)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at java.security.AccessController.doPrivileged(Native Method) 
> [na:1.7.0_65]
> at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_65]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>  [hadoop-common-2.7.0-mapr-1506.jar:na]
>  at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
>  [drill-java-exec-1.4.0.jar:1.4.0]
> at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.4.0.jar:1.4.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
> {noformat}
> Query plan for above query.
> {noformat}
> 00-00Screen : rowType = RecordType(ANY operation): rowcount = 141437.16, 
> cumulative cost = {3.4100499276E7 rows, 1.69455861396E8 cpu, 0.0 io, 
> 1.2165858754560001E10 network, 2.738223417605E8 memory}, id = 7572
> 00-01  UnionExchange : rowType = RecordType(ANY operation): rowcount = 
> 141437.16, cumulative cost = {3.408635556E7 rows, 1.6944171768E8 cpu, 0.0 io, 
> 1.2165858754560001E10 network, 2.738223417605E8 

[jira] [Commented] (DRILL-4327) Fix rawtypes warning emitted by compiler

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123899#comment-15123899
 ] 

ASF GitHub Bot commented on DRILL-4327:
---

GitHub user laurentgo reopened a pull request:

https://github.com/apache/drill/pull/347

DRILL-4327: Fix rawtypes warnings emitted by compiler

The Drill code base references lots of raw types, which generates many 
warnings from the compiler.

As Drill is now compiled with Java 1.7, most of them can be replaced by 
generic types.
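
As a minimal illustration of the class of change being made (these are stand-in snippets, not code from the Drill tree), a raw-typed collection versus its generic replacement:

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypesExample {
    @SuppressWarnings("rawtypes")
    static List rawList() {
        // Raw type: the compiler emits rawtypes/unchecked warnings here.
        List names = new ArrayList();
        names.add("drill");
        return names;
    }

    static List<String> genericList() {
        // Generic type with the Java 7 diamond operator: no warning,
        // and the element type is checked at compile time.
        List<String> names = new ArrayList<>();
        names.add("drill");
        return names;
    }

    public static void main(String[] args) {
        System.out.println(rawList().get(0));
        System.out.println(genericList().get(0));
    }
}
```

Both variants behave identically at runtime; the generic form only removes the compiler warning and restores type checking.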


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/laurentgo/drill laurent/fix-rawtypes-warnings

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #347


commit 747df153d60792442e7b5dc2899a432f6647dffa
Author: Laurent Goujon 
Date:   2016-01-28T03:01:13Z

Fix rawtypes warnings in exec-java

Fixing all the rawtypes warning issues in exec/java-exec module.

commit 74da2c3481589e6da2f371b09c83e2f67db81486
Author: Laurent Goujon 
Date:   2016-01-28T23:30:54Z

Fix rawtypes warnings in drill codebase

Fixing most rawtypes warning issues in drill modules.




> Fix rawtypes warning emitted by compiler
> 
>
> Key: DRILL-4327
> URL: https://issues.apache.org/jira/browse/DRILL-4327
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Laurent Goujon
>Assignee: Laurent Goujon
>Priority: Minor
>
> The Drill codebase references lots of raw types, which generates many 
> warnings from the compiler.
> Since Drill is now compiled with Java 1.7, it should use generic types as 
> much as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DRILL-4295) Obsolete protobuf generated files under protocol/

2016-01-29 Thread Laurent Goujon (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laurent Goujon reassigned DRILL-4295:
-

Assignee: Laurent Goujon

> Obsolete protobuf generated files under protocol/
> -
>
> Key: DRILL-4295
> URL: https://issues.apache.org/jira/browse/DRILL-4295
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Reporter: Laurent Goujon
>Assignee: Laurent Goujon
>Priority: Trivial
>
> The following two files don't have a protobuf definition anymore, and are not 
> generated when running {{mvn process-sources -P proto-compile}} under 
> {{protocol/}}:
> {noformat}
> src/main/java/org/apache/drill/exec/proto/beans/RpcFailure.java
> src/main/java/org/apache/drill/exec/proto/beans/ViewPointer.java
> {noformat}





[jira] [Created] (DRILL-4328) Fix for backward compatibility regression caused by DRILL-4198

2016-01-29 Thread Venki Korukanti (JIRA)
Venki Korukanti created DRILL-4328:
--

 Summary: Fix for backward compatibility regression caused by 
DRILL-4198
 Key: DRILL-4328
 URL: https://issues.apache.org/jira/browse/DRILL-4328
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Other
Reporter: Venki Korukanti
Assignee: Venki Korukanti


Revert updates made to StoragePlugin interface in DRILL-4198. Instead add the 
new methods to AbstractStoragePlugin. 
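
The compatibility pattern being described can be sketched as follows (illustrative stand-in types, not the real org.apache.drill.exec.store interfaces): adding a method to an interface breaks plugins compiled against the old interface, while adding it to the shared abstract base class with a default body does not.

```java
// Stand-in for the plugin interface; the pre-existing method is unchanged.
interface StoragePluginLike {
    String getName();
}

// New behavior lands on the abstract base class with a default body, so
// plugins compiled against the old interface remain binary compatible.
abstract class AbstractStoragePluginLike implements StoragePluginLike {
    public boolean supportsRead() {
        return false;
    }
}

public class CompatSketch {
    // A plugin built before the new method existed still loads and runs.
    static class LegacyPlugin extends AbstractStoragePluginLike {
        @Override
        public String getName() {
            return "legacy";
        }
    }

    public static void main(String[] args) {
        LegacyPlugin p = new LegacyPlugin();
        System.out.println(p.getName() + " " + p.supportsRead());
    }
}
```

Subclasses that want the new behavior override it; everyone else inherits the default.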





[jira] [Resolved] (DRILL-4313) C++ client - Improve method of drillbit selection from cluster

2016-01-29 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra resolved DRILL-4313.
--
Resolution: Fixed

Fixed in 576271d

> C++ client - Improve method of drillbit selection from cluster
> --
>
> Key: DRILL-4313
> URL: https://issues.apache.org/jira/browse/DRILL-4313
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>
> The current C++ client handles multiple parallel queries over the same 
> connection, but that creates a bottleneck as the queries get sent to the same 
> drillbit.
> The client can manage this more effectively by choosing from a configurable 
> pool of connections and round-robining queries across them.
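
Round-robin selection over a connection pool can be sketched as below (in Java rather than C++, and with illustrative names rather than the client's actual API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class DrillbitPool {
    private final List<String> endpoints;
    private final AtomicInteger next = new AtomicInteger();

    public DrillbitPool(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Hands out endpoints in rotation so parallel queries spread across
    // drillbits instead of piling onto one connection.
    public String nextEndpoint() {
        int i = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }

    public static void main(String[] args) {
        DrillbitPool pool = new DrillbitPool(
                List.of("bit1:31010", "bit2:31010", "bit3:31010"));
        for (int q = 0; q < 4; q++) {
            System.out.println(pool.nextEndpoint());
        }
    }
}
```

The AtomicInteger keeps the rotation correct when multiple query threads pick endpoints concurrently.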





[jira] [Created] (DRILL-4326) JDBC Storage Plugin for PostgreSQL does not work

2016-01-29 Thread Akon Dey (JIRA)
Akon Dey created DRILL-4326:
---

 Summary: JDBC Storage Plugin for PostgreSQL does not work
 Key: DRILL-4326
 URL: https://issues.apache.org/jira/browse/DRILL-4326
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Other
Affects Versions: 1.3.0, 1.4.0, 1.5.0
 Environment: Mac OS X JDK 1.8 PostgreSQL 9.4.4 PostgreSQL JDBC jars 
(postgresql-9.2-1004-jdbc4.jar, postgresql-9.1-901-1.jdbc4.jar, )
Reporter: Akon Dey


Queries with the JDBC Storage Plugin for PostgreSQL fail with DATA READ ERROR.

The JDBC Storage Plugin settings in use are:
{code:json}
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://127.0.0.1/test",
  "username": "akon",
  "password": null,
  "enabled": false
}
{code}

Please refer to the following stack for further details:

{noformat}
Akons-MacBook-Pro:drill akon$ 
./distribution/target/apache-drill-1.5.0-SNAPSHOT/apache-drill-1.5.0-SNAPSHOT/bin/drill-embedded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; 
support was removed in 8.0
Jan 29, 2016 9:17:18 AM org.glassfish.jersey.server.ApplicationHandler 
initialize
INFO: Initiating Jersey application, version Jersey: 2.8 2014-04-29 01:25:26...
apache drill 1.5.0-SNAPSHOT
"a little sql for your nosql"
0: jdbc:drill:zk=local> !verbose
verbose: on
0: jdbc:drill:zk=local> use pgdb;
+---+---+
|  ok   |  summary  |
+---+---+
| true  | Default schema changed to [pgdb]  |
+---+---+
1 row selected (0.753 seconds)
0: jdbc:drill:zk=local> select * from ips;
Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010] 
(state=,code=0)
java.sql.SQLException: DATA_READ ERROR: The JDBC storage plugin failed while 
trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:290)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1923)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:73)
at 
net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
at 
net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
at 
net.hydromatic.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:338)
at 
net.hydromatic.avatica.AvaticaStatement.execute(AvaticaStatement.java:69)
at 
org.apache.drill.jdbc.impl.DrillStatementImpl.execute(DrillStatementImpl.java:101)
at sqlline.Commands.execute(Commands.java:841)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:746)
at sqlline.SqlLine.begin(SqlLine.java:621)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: DATA_READ 
ERROR: The JDBC storage plugin failed while trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: 26ada06d-e08d-456a-9289-0dec2089b018 on 10.200.104.128:31010]
at 
org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
at 
org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
at 
org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
at 
org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
at 
org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
at 
org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
at 
org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 

[jira] [Commented] (DRILL-4326) JDBC Storage Plugin for PostgreSQL does not work

2016-01-29 Thread Akon Dey (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123839#comment-15123839
 ] 

Akon Dey commented on DRILL-4326:
-

Additional debug information after setting {{exec.errors.verbose}} to {{true}}:

{noformat}
0: jdbc:drill:zk=local> ALTER SESSION SET `exec.errors.verbose` = true;
+---+---+
|  ok   |summary|
+---+---+
| true  | exec.errors.verbose updated.  |
+---+---+
1 row selected (0.098 seconds)
0: jdbc:drill:zk=local> select * from ips;
Error: DATA_READ ERROR: The JDBC storage plugin failed while trying setup the 
SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: dba791eb-7e34-4e87-bd34-47bced92dad8 on 10.200.104.128:31010]

  (org.postgresql.util.PSQLException) ERROR: relation "test.ips" does not exist
  Position: 15
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2103
org.postgresql.core.v3.QueryExecutorImpl.processResults():1836
org.postgresql.core.v3.QueryExecutorImpl.execute():257
org.postgresql.jdbc2.AbstractJdbc2Statement.execute():512
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags():374
org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery():254
org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
org.apache.drill.exec.physical.impl.ScanBatch.<init>():108
org.apache.drill.exec.physical.impl.ScanBatch.<init>():136
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1142
java.util.concurrent.ThreadPoolExecutor$Worker.run():617
java.lang.Thread.run():745 (state=,code=0)
java.sql.SQLException: DATA_READ ERROR: The JDBC storage plugin failed while 
trying setup the SQL query.

sql SELECT *
FROM "test"."ips"
plugin pgdb
Fragment 0:0

[Error Id: dba791eb-7e34-4e87-bd34-47bced92dad8 on 10.200.104.128:31010]

  (org.postgresql.util.PSQLException) ERROR: relation "test.ips" does not exist
  Position: 15
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse():2103
org.postgresql.core.v3.QueryExecutorImpl.processResults():1836
org.postgresql.core.v3.QueryExecutorImpl.execute():257
org.postgresql.jdbc2.AbstractJdbc2Statement.execute():512
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags():374
org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery():254
org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
org.apache.commons.dbcp.DelegatingStatement.executeQuery():208
org.apache.drill.exec.store.jdbc.JdbcRecordReader.setup():177
org.apache.drill.exec.physical.impl.ScanBatch.<init>():108
org.apache.drill.exec.physical.impl.ScanBatch.<init>():136
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():40
org.apache.drill.exec.store.jdbc.JdbcBatchCreator.getBatch():33
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():147
org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch():127
org.apache.drill.exec.physical.impl.ImplCreator.getChildren():170
org.apache.drill.exec.physical.impl.ImplCreator.getRootExec():101
org.apache.drill.exec.physical.impl.ImplCreator.getExec():79
org.apache.drill.exec.work.fragment.FragmentExecutor.run():230
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1142
java.util.concurrent.ThreadPoolExecutor$Worker.run():617
java.lang.Thread.run():745

at 
org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:247)
at 
org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:290)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1923)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:73)
at 
net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
at 

[jira] [Updated] (DRILL-4313) C++ client - Improve method of drillbit selection from cluster

2016-01-29 Thread Parth Chandra (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Chandra updated DRILL-4313:
-
Summary: C++ client - Improve method of drillbit selection from cluster  
(was: C++ client should manage load balance of queries)

> C++ client - Improve method of drillbit selection from cluster
> --
>
> Key: DRILL-4313
> URL: https://issues.apache.org/jira/browse/DRILL-4313
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Parth Chandra
>
> The current C++ client handles multiple parallel queries over the same 
> connection, but that creates a bottleneck as the queries get sent to the same 
> drillbit.
> The client can manage this more effectively by choosing from a configurable 
> pool of connections and round-robining queries across them.





[jira] [Commented] (DRILL-4308) Aggregate operations on dir columns can be more efficient for certain use cases

2016-01-29 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124340#comment-15124340
 ] 

Jason Altekruse commented on DRILL-4308:


Hey [~amansinha100], I tried re-creating this and was not able to see this 
behavior. I only created the folder structure on my local machine, but it 
looked like the tree below, and I seem to be getting correct results for these 
types of queries.

{code}
0: jdbc:drill:zk=local> select dir0 from mock_data where dir0 = 
mindir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1994  |
+---+
1 row selected (0.127 seconds)
0: jdbc:drill:zk=local> select dir0 from mock_data where dir0 = 
maxdir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1997  |
+---+
1 row selected (0.123 seconds)



Jasons-MacBook-Pro:maxdir jaltekruse$ tree mock_data/
mock_data/
├── 1994
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
├── 1995
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
├── 1996
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
└── 1997
├── Q1
│   └── data.csv
├── Q2
│   └── data.csv
├── Q3
│   └── data.csv
└── Q4
└── data.csv
{code}

> Aggregate operations on dir columns can be more efficient for certain use 
> cases
> --
>
> Key: DRILL-4308
> URL: https://issues.apache.org/jira/browse/DRILL-4308
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.4.0
>Reporter: Aman Sinha
>
> For queries that perform plain aggregates or DISTINCT operations on the 
> directory partition columns (dir0, dir1 etc.) and there are no other columns 
> referenced in the query, the performance could be substantially improved by 
> not having to scan the entire dataset.   
> Consider the following types of queries:
> {noformat}
> select  min(dir0) from largetable;
> select  distinct dir0 from largetable;
> {noformat}
> The number of distinct values of dir columns is typically quite small and 
> there's no reason to scan the large table.  This has also come up as 
> feedback from some Drill users.  Of course, if there's any other column 
> referenced in the query (WHERE, ORDER-BY etc.) then we cannot apply this 
> optimization.  
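
The proposed optimization amounts to answering such a query from the directory listing alone, without touching file data. A plain-Java sketch under that assumption (paths and names are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.stream.Stream;

public class DirColumnAggregate {
    // Computes min(dir0) from the child directory names only; no file
    // contents are read, so cost is proportional to the listing size.
    static Optional<String> minDir0(Path tableRoot) throws IOException {
        try (Stream<Path> children = Files.list(tableRoot)) {
            return children.filter(Files::isDirectory)
                    .map(p -> p.getFileName().toString())
                    .sorted()
                    .findFirst();
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("largetable");
        Files.createDirectory(root.resolve("1994"));
        Files.createDirectory(root.resolve("1997"));
        System.out.println(minDir0(root).orElse("<empty>"));
    }
}
```

A DISTINCT on dir0 is the same listing without the findFirst; both avoid scanning the table itself, which is exactly why the planner-level rewrite pays off.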





[jira] [Commented] (DRILL-4328) Fix for backward compatibility regression caused by DRILL-4198

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124417#comment-15124417
 ] 

ASF GitHub Bot commented on DRILL-4328:
---

Github user jaltekruse commented on the pull request:

https://github.com/apache/drill/pull/348#issuecomment-177013054
  
+1. Tested the change by swapping the 1.5 mongo storage plugin jar with the 
1.4 one, and it worked.


> Fix for backward compatibility regression caused by DRILL-4198
> --
>
> Key: DRILL-4328
> URL: https://issues.apache.org/jira/browse/DRILL-4328
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
>
> Revert updates made to StoragePlugin interface in DRILL-4198. Instead add the 
> new methods to AbstractStoragePlugin. 
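The compatibility issue behind this fix is a general Java one: adding abstract methods to an interface breaks every implementation compiled against the old version (an {{AbstractMethodError}} at runtime, e.g. a 1.4 storage plugin jar loaded by 1.5). A hedged sketch of the pattern, with invented names rather than Drill's real classes:

```java
// Hypothetical names for illustration; not Drill's actual API.
interface StoragePlugin {
    String getName();
    // Adding a new abstract method here would break every plugin jar
    // compiled against the old interface.
}

abstract class AbstractStoragePlugin implements StoragePlugin {
    // New functionality goes on the abstract base class with a safe
    // default, so plugins compiled against the old API keep working.
    public boolean supportsRead() {
        return false;
    }
}

class LegacyPlugin extends AbstractStoragePlugin {
    // Compiled against the old API: knows nothing about supportsRead().
    @Override
    public String getName() {
        return "legacy";
    }
}

public class CompatDemo {
    public static void main(String[] args) {
        AbstractStoragePlugin p = new LegacyPlugin();
        // The old plugin still loads; the new method resolves to the default.
        System.out.println(p.getName() + " supportsRead=" + p.supportsRead());
    }
}
```

The trade-off is that callers must program against the abstract class (or check for it) to reach the new methods, which is why the revert keeps the interface untouched.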



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (DRILL-4308) Aggregate operations on dir columns can be more efficient for certain use cases

2016-01-29 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124340#comment-15124340
 ] 

Jason Altekruse edited comment on DRILL-4308 at 1/29/16 10:27 PM:
--

Hey [~amansinha100], I tried re-creating this and was not able to see this 
behavior. I only created the folder structure on my local machine, but it 
looked like this, and I seem to be getting correct results for these types of 
queries.

{code}
0: jdbc:drill:zk=local> select dir0 from mock_data where dir0 = 
mindir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1994  |
+---+
1 row selected (0.127 seconds)
0: jdbc:drill:zk=local> select dir0 from mock_data where dir0 = 
maxdir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1997  |
+---+
1 row selected (0.123 seconds)



Jasons-MacBook-Pro:maxdir jaltekruse$ tree mock_data/
mock_data/
├── 1994
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
├── 1995
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
├── 1996
│   ├── Q1
│   │   └── data.csv
│   ├── Q2
│   │   └── data.csv
│   ├── Q3
│   │   └── data.csv
│   └── Q4
│   └── data.csv
└── 1997
├── Q1
│   └── data.csv
├── Q2
│   └── data.csv
├── Q3
│   └── data.csv
└── Q4
└── data.csv
{code}



> Aggregate operations on dir columns can be more efficient for certain use 
> cases
> --
>
> Key: DRILL-4308
> URL: https://issues.apache.org/jira/browse/DRILL-4308
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.4.0
>Reporter: Aman Sinha
>
> For queries that perform plain aggregates or DISTINCT operations on the 
> directory partition columns (dir0, dir1 etc.) and there are no other columns 
> referenced in the query, the performance could be substantially improved by 
> not having to scan the entire dataset.   
> Consider the following types of queries:
> {noformat}
> select  min(dir0) from largetable;
> select  distinct dir0 from largetable;
> {noformat}
> The number of distinct values of dir columns is typically quite small and 
> there's no reason to scan the large table.  This has also come up as 
> feedback from some Drill users.  Of course, if there's any other column 
> referenced in the query (WHERE, ORDER-BY etc.) then we cannot apply this 
> optimization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4323) Hive Native Reader : A simple count(*) throws Incoming batch has an empty schema error

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124470#comment-15124470
 ] 

ASF GitHub Bot commented on DRILL-4323:
---

GitHub user hsuanyi opened a pull request:

https://github.com/apache/drill/pull/349

DRILL-4323: When converting HiveParquetScan To DrillParquetScan, do n…

…ot add Project when no column is needed to be read out from Scan (e.g., 
select count(*) from hive.table)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hsuanyi/incubator-drill DRILL-4323

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/349.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #349


commit f06ef505628e12131c48bc7747ca21b007c2d2b4
Author: Hsuan-Yi Chu 
Date:   2016-01-29T21:20:12Z

DRILL-4323: When converting HiveParquetScan To DrillParquetScan, do not add 
Project when no column is needed to be read out from Scan (e.g., select 
count(*) from hive.table)
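The idea in the commit message can be sketched with minimal stand-in plan nodes (invented for illustration, not Drill's real planner classes): when a scan conversion reads no columns out of the scan, as with {{count(*)}}, wrapping the converted scan in a Project hands downstream operators an empty schema, so the conversion should return the scan directly.

```java
import java.util.Collections;
import java.util.List;

public class ConversionSketch {
    // Minimal stand-ins for plan nodes; invented for illustration,
    // not Drill's actual planner classes.
    static class Scan {
        final List<String> columns;
        Scan(List<String> columns) { this.columns = columns; }
    }
    static class Project extends Scan {
        final Scan input;
        Project(List<String> columns, Scan input) {
            super(columns);
            this.input = input;
        }
    }

    // When no column is read out of the scan (e.g. count(*)), a Project on
    // top would carry an empty schema downstream; skip it in that case.
    static Scan convert(Scan hiveScan) {
        Scan nativeScan = new Scan(hiveScan.columns);
        if (hiveScan.columns.isEmpty()) {
            return nativeScan;
        }
        return new Project(hiveScan.columns, nativeScan);
    }

    public static void main(String[] args) {
        Scan countStar = convert(new Scan(Collections.emptyList()));
        System.out.println("adds Project: " + (countStar instanceof Project));
    }
}
```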




> Hive Native Reader : A simple count(*) throws Incoming batch has an empty 
> schema error
> --
>
> Key: DRILL-4323
> URL: https://issues.apache.org/jira/browse/DRILL-4323
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.5.0
>Reporter: Rahul Challapalli
>Assignee: Sean Hsuan-Yi Chu
>Priority: Critical
> Attachments: error.log
>
>
> git.commit.id.abbrev=3d0b4b0
> A simple count(*) query does not work when hive native reader is enabled
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> +-+
> | EXPR$0  |
> +-+
> | 10  |
> +-+
> 1 row selected (3.074 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `store.hive.optimize_scan_with_native_readers` = true;
> +---++
> |  ok   |summary |
> +---++
> | true  | store.hive.optimize_scan_with_native_readers updated.  |
> +---++
> 1 row selected (0.2 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> Error: SYSTEM ERROR: IllegalStateException: Incoming batch [#1341, 
> ProjectRecordBatch] has an empty schema. This is not allowed.
> Fragment 0:0
> [Error Id: 4c867440-0fd3-4eda-922f-0f5eadcb1463 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> {code}
> Hive DDL for the table :
> {code}
> create table customer
> (
> c_customer_sk int,
> c_customer_id string,
> c_current_cdemo_sk int,
> c_current_hdemo_sk int,
> c_current_addr_sk int,
> c_first_shipto_date_sk int,
> c_first_sales_date_sk int,
> c_salutation string,
> c_first_name string,
> c_last_name string,
> c_preferred_cust_flag string,
> c_birth_day int,
> c_birth_month int,
> c_birth_year int,
> c_birth_country string,
> c_login string,
> c_email_address string,
> c_last_review_date string
> )
> STORED AS PARQUET
> LOCATION '/drill/testdata/customer'
> {code}
> Attached the log file with the stacktrace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4323) Hive Native Reader : A simple count(*) throws Incoming batch has an empty schema error

2016-01-29 Thread Jinfeng Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124546#comment-15124546
 ] 

Jinfeng Ni commented on DRILL-4323:
---

Can you ask Rahul to verify with your patch? 



> Hive Native Reader : A simple count(*) throws Incoming batch has an empty 
> schema error
> --
>
> Key: DRILL-4323
> URL: https://issues.apache.org/jira/browse/DRILL-4323
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.5.0
>Reporter: Rahul Challapalli
>Assignee: Sean Hsuan-Yi Chu
>Priority: Critical
> Attachments: error.log
>
>
> git.commit.id.abbrev=3d0b4b0
> A simple count(*) query does not work when hive native reader is enabled
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> +-+
> | EXPR$0  |
> +-+
> | 10  |
> +-+
> 1 row selected (3.074 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `store.hive.optimize_scan_with_native_readers` = true;
> +---++
> |  ok   |summary |
> +---++
> | true  | store.hive.optimize_scan_with_native_readers updated.  |
> +---++
> 1 row selected (0.2 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> Error: SYSTEM ERROR: IllegalStateException: Incoming batch [#1341, 
> ProjectRecordBatch] has an empty schema. This is not allowed.
> Fragment 0:0
> [Error Id: 4c867440-0fd3-4eda-922f-0f5eadcb1463 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> {code}
> Hive DDL for the table :
> {code}
> create table customer
> (
> c_customer_sk int,
> c_customer_id string,
> c_current_cdemo_sk int,
> c_current_hdemo_sk int,
> c_current_addr_sk int,
> c_first_shipto_date_sk int,
> c_first_sales_date_sk int,
> c_salutation string,
> c_first_name string,
> c_last_name string,
> c_preferred_cust_flag string,
> c_birth_day int,
> c_birth_month int,
> c_birth_year int,
> c_birth_country string,
> c_login string,
> c_email_address string,
> c_last_review_date string
> )
> STORED AS PARQUET
> LOCATION '/drill/testdata/customer'
> {code}
> Attached the log file with the stacktrace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4308) Aggregate operations on dir columns can be more efficient for certain use cases

2016-01-29 Thread Jason Altekruse (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124349#comment-15124349
 ] 

Jason Altekruse commented on DRILL-4308:


To match your query a little more closely, I changed my current schema and 
fully qualified the table name; still the same results.

{code}
0: jdbc:drill:zk=local> select dir0 from dfs.mxd.mock_data where dir0 = 
maxdir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1997  |
+---+
1 row selected (0.125 seconds)
0: jdbc:drill:zk=local> select dir0 from dfs.mxd.mock_data where dir0 = 
mindir('dfs.mxd','mock_data') limit 1;
+---+
| dir0  |
+---+
| 1994  |
+---+
1 row selected (0.116 seconds)
{code}

> Aggregate operations on dir columns can be more efficient for certain use 
> cases
> --
>
> Key: DRILL-4308
> URL: https://issues.apache.org/jira/browse/DRILL-4308
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Relational Operators
>Affects Versions: 1.4.0
>Reporter: Aman Sinha
>
> For queries that perform plain aggregates or DISTINCT operations on the 
> directory partition columns (dir0, dir1 etc.) and there are no other columns 
> referenced in the query, the performance could be substantially improved by 
> not having to scan the entire dataset.   
> Consider the following types of queries:
> {noformat}
> select  min(dir0) from largetable;
> select  distinct dir0 from largetable;
> {noformat}
> The number of distinct values of dir columns is typically quite small and 
> there's no reason to scan the large table.  This has also come up as 
> feedback from some Drill users.  Of course, if there's any other column 
> referenced in the query (WHERE, ORDER-BY etc.) then we cannot apply this 
> optimization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4329) 50 Unit tests are failing with JDK 8

2016-01-29 Thread Deneche A. Hakim (JIRA)
Deneche A. Hakim created DRILL-4329:
---

 Summary: 50 Unit tests are failing with JDK 8
 Key: DRILL-4329
 URL: https://issues.apache.org/jira/browse/DRILL-4329
 Project: Apache Drill
  Issue Type: Sub-task
 Environment: Mac OS
JDK 1.8.0_65
Reporter: Deneche A. Hakim


The following unit tests are failing when building Drill with JDK 1.8.0_65:
{noformat}
  TestFlattenPlanning.testFlattenPlanningAvoidUnnecessaryProject
  TestFrameworkTest {
testRepeatedColumnMatching
testCSVVerificationOfOrder_checkFailure
  }
  Drill2489CallsAfterCloseThrowExceptionsTest {
testClosedDatabaseMetaDataMethodsThrowRight
testClosedPlainStatementMethodsThrowRight
testclosedPreparedStmtOfOpenConnMethodsThrowRight
testClosedResultSetMethodsThrowRight1
testClosedResultSetMethodsThrowRight2
  }
  Drill2769UnsupportedReportsUseSqlExceptionTest {
testPreparedStatementMethodsThrowRight
testPlainStatementMethodsThrowRight
  }
  TestMongoFilterPushDown {
testFilterPushDownIsEqual
testFilterPushDownGreaterThanWithSingleField
testFilterPushDownLessThanWithSingleField
  }
TestHiveStorage {
orderByOnHiveTable
queryingTableWithSerDeInHiveContribJar
queryingTablesInNonDefaultFS
readFromAlteredPartitionedTable
readAllSupportedHiveDataTypesNativeParquet
queryingHiveAvroTable
readAllSupportedHiveDataTypes
convertFromOnHiveBinaryType
  }
  TestInfoSchemaOnHiveStorage {
showDatabases
showTablesFromDb
defaultSchemaHive
defaultTwoLevelSchemaHive
describeTable1
describeTable2
describeTable3
describeTable4
describeTable5
describeTable6
describeTable7
describeTable8
describeTable9
varCharMaxLengthAndDecimalPrecisionInInfoSchema
  }   
  TestSqlStdBasedAuthorization {
showSchemas
showTables_user0
showTables_user1
  }
  TestStorageBasedHiveAuthorization {
showSchemas
showTablesUser0
showTablesUser1
showTablesUser2
  }
  TestViewSupportOnHiveTables {
viewWithSelectFieldsInDef_StarInQuery
testInfoSchemaWithHiveView
viewWithStarInDef_SelectFieldsInQuery1
viewWithStarInDef_SelectFieldsInQuery2
viewWithSelectFieldsInDef_SelectFieldsInQuery
viewWithStarInDef_StarInQuery
  }
  TestGeometryFunctions {
testGeometryPointCreation
testGeometryFromTextCreation
  }
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-1489) Test failures in TestClassTransformation and TestTpchSingleMode under JDK 1.8.

2016-01-29 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim closed DRILL-1489.
---
Resolution: Fixed

This specific unit test doesn't fail anymore. I will open new JIRAs for the 
tests that are now failing (lots of them).

> Test failures in TestClassTransformation and TestTpchSingleMode under JDK 1.8.
> --
>
> Key: DRILL-1489
> URL: https://issues.apache.org/jira/browse/DRILL-1489
> Project: Apache Drill
>  Issue Type: Sub-task
>  Components: Execution - Codegen
>Affects Versions: 0.6.0
>Reporter: Julian Hyde
> Fix For: Future
>
> Attachments: hs_err_pid21519.log
>
>
> I downloaded the 0.6.0-rc1 source tarball, built and ran tests using "mvn 
> install" under JDK 1.8 on mac os x.
> I got 1 error in TestClassTransformation and 3 errors in TestTpchSingleMode:
> {code}
> Tests run: 23, Failures: 0, Errors: 3, Skipped: 7, Time elapsed: 124.805 sec 
> <<< FAILURE! - in org.apache.drill.TestTpchSingleMode
> tpch04(org.apache.drill.TestTpchSingleMode)  Time elapsed: 50.01 sec  <<< 
> ERROR!
> java.lang.Exception: test timed out after 5 milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.drill.exec.client.PrintingResultsListener.await(PrintingResultsListener.java:96)
>   at 
> org.apache.drill.BaseTestQuery.testRunAndPrint(BaseTestQuery.java:162)
>   at org.apache.drill.BaseTestQuery.test(BaseTestQuery.java:191)
>   at 
> org.apache.drill.TestTpchSingleMode.testSingleMode(TestTpchSingleMode.java:32)
>   at 
> org.apache.drill.TestTpchSingleMode.tpch04(TestTpchSingleMode.java:53)
> tpch16(org.apache.drill.TestTpchSingleMode)  Time elapsed: 50.004 sec  <<< 
> ERROR!
> java.lang.Exception: test timed out after 5 milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.drill.exec.client.PrintingResultsListener.await(PrintingResultsListener.java:96)
>   at 
> org.apache.drill.BaseTestQuery.testRunAndPrint(BaseTestQuery.java:162)
>   at org.apache.drill.BaseTestQuery.test(BaseTestQuery.java:191)
>   at 
> org.apache.drill.TestTpchSingleMode.testSingleMode(TestTpchSingleMode.java:32)
>   at 
> org.apache.drill.TestTpchSingleMode.tpch16(TestTpchSingleMode.java:115)
> org.apache.drill.TestTpchSingleMode  Time elapsed: 10.107 sec  <<< ERROR!
> java.lang.IllegalStateException: Failure while trying to close allocator: 
> Child level allocators not closed. Stack trace: 
>   java.lang.Thread.getStackTrace(Thread.java:1551)
>   
> org.apache.drill.exec.memory.TopLevelAllocator.getChildAllocator(TopLevelAllocator.java:115)
>   
> org.apache.drill.exec.ops.FragmentContext.(FragmentContext.java:113)
>   
> org.apache.drill.exec.work.foreman.QueryManager.runFragments(QueryManager.java:100)
>   
> org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:410)
>   
> org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:427)
>   org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:219)
>   
> org.apache.drill.exec.work.WorkManager$RunnableWrapper.run(WorkManager.java:250)
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   java.lang.Thread.run(Thread.java:744)
>   at 
> org.apache.drill.exec.memory.TopLevelAllocator.close(TopLevelAllocator.java:148)
>   at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:73)
>   at 

[jira] [Updated] (DRILL-4323) Hive Native Reader : A simple count(*) throws Incoming batch has an empty schema error

2016-01-29 Thread Suresh Ollala (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Ollala updated DRILL-4323:
-
Reviewer: Rahul Challapalli

> Hive Native Reader : A simple count(*) throws Incoming batch has an empty 
> schema error
> --
>
> Key: DRILL-4323
> URL: https://issues.apache.org/jira/browse/DRILL-4323
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Hive
>Affects Versions: 1.5.0
>Reporter: Rahul Challapalli
>Assignee: Sean Hsuan-Yi Chu
>Priority: Critical
> Attachments: error.log
>
>
> git.commit.id.abbrev=3d0b4b0
> A simple count(*) query does not work when hive native reader is enabled
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> +-+
> | EXPR$0  |
> +-+
> | 10  |
> +-+
> 1 row selected (3.074 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `store.hive.optimize_scan_with_native_readers` = true;
> +---++
> |  ok   |summary |
> +---++
> | true  | store.hive.optimize_scan_with_native_readers updated.  |
> +---++
> 1 row selected (0.2 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from customer;
> Error: SYSTEM ERROR: IllegalStateException: Incoming batch [#1341, 
> ProjectRecordBatch] has an empty schema. This is not allowed.
> Fragment 0:0
> [Error Id: 4c867440-0fd3-4eda-922f-0f5eadcb1463 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> {code}
> Hive DDL for the table :
> {code}
> create table customer
> (
> c_customer_sk int,
> c_customer_id string,
> c_current_cdemo_sk int,
> c_current_hdemo_sk int,
> c_current_addr_sk int,
> c_first_shipto_date_sk int,
> c_first_sales_date_sk int,
> c_salutation string,
> c_first_name string,
> c_last_name string,
> c_preferred_cust_flag string,
> c_birth_day int,
> c_birth_month int,
> c_birth_year int,
> c_birth_country string,
> c_login string,
> c_email_address string,
> c_last_review_date string
> )
> STORED AS PARQUET
> LOCATION '/drill/testdata/customer'
> {code}
> Attached the log file with the stacktrace



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4328) Fix for backward compatibility regression caused by DRILL-4198

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124686#comment-15124686
 ] 

ASF GitHub Bot commented on DRILL-4328:
---

Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/348


> Fix for backward compatibility regression caused by DRILL-4198
> --
>
> Key: DRILL-4328
> URL: https://issues.apache.org/jira/browse/DRILL-4328
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
>
> Revert updates made to StoragePlugin interface in DRILL-4198. Instead add the 
> new methods to AbstractStoragePlugin. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4328) Fix for backward compatibility regression caused by DRILL-4198

2016-01-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123942#comment-15123942
 ] 

ASF GitHub Bot commented on DRILL-4328:
---

GitHub user vkorukanti opened a pull request:

https://github.com/apache/drill/pull/348

DRILL-4328: Fix backward compatibility regression caused by DRILL-4198



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vkorukanti/drill fix_comp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/348.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #348


commit 03197d0f2c665b7671b366332e1b4ebc2f271bd9
Author: vkorukanti 
Date:   2016-01-28T22:54:05Z

DRILL-4328: Fix backward compatibility regression caused by DRILL-4198




> Fix for backward compatibility regression caused by DRILL-4198
> --
>
> Key: DRILL-4328
> URL: https://issues.apache.org/jira/browse/DRILL-4328
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Other
>Reporter: Venki Korukanti
>Assignee: Venki Korukanti
>
> Revert updates made to StoragePlugin interface in DRILL-4198. Instead add the 
> new methods to AbstractStoragePlugin. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DRILL-3488) Allow Java 1.8

2016-01-29 Thread Deneche A. Hakim (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deneche A. Hakim reassigned DRILL-3488:
---

Assignee: Deneche A. Hakim

> Allow Java 1.8
> --
>
> Key: DRILL-3488
> URL: https://issues.apache.org/jira/browse/DRILL-3488
> Project: Apache Drill
>  Issue Type: Sub-task
>Reporter: Andrew
>Assignee: Deneche A. Hakim
>Priority: Trivial
> Attachments: DRILL-3488.1.patch.txt
>
>
> From my limited testing it seems that Drill works well with either Java 1.7 
> or 1.8. I'd like to change the top-level pom to allow 1.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)