[jira] [Commented] (DRILL-4196) some TPCDS queries return wrong result when hash join is disabled

2016-01-26 Thread Victoria Markman (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118454#comment-15118454
 ] 

Victoria Markman commented on DRILL-4196:
-

I will do it tonight.

> some TPCDS queries return wrong result when hash join is disabled
> -
>
> Key: DRILL-4196
> URL: https://issues.apache.org/jira/browse/DRILL-4196
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Victoria Markman
>Assignee: amit hadke
> Attachments: 1.5.0-amit-branch_tpcds_sf1.txt, query40.tar, query52.tar
>
>
> With hash join disabled, query52.sql and query40.sql returned incorrect results
> with 1.4.0:
> {noformat}
> | version        | commit_id                                 | commit_message                                                      | commit_time                | build_email | build_time                 |
> | 1.4.0-SNAPSHOT | b9068117177c3b47025f52c00f67938e0c3e4732  | DRILL-4165 Add a precondition for size of merge join record batch.  | 08.12.2015 @ 01:25:34 UTC  | Unknown     | 08.12.2015 @ 03:36:25 UTC  |
> 1 row selected (2.13 seconds)
> {noformat}
> Setup and options are the same as in DRILL-4190
> See attached queries (.sql), expected result (.e_tsv) and actual output (.out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4311) Unexpected exception during fragment initialization: Internal error: Error while applying rule DrillTableRule, args [rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).

2016-01-26 Thread Chun Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118421#comment-15118421
 ] 

Chun Chang commented on DRILL-4311:
---

It's possible the Hive table was not present (creation failed, or it got dropped 
by another test). So it could be a test issue.

> Unexpected exception during fragment initialization: Internal error: Error 
> while applying rule DrillTableRule, args 
> [rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[hive, 
> lineitem_text_partitioned_hive_hier_intstring])]
> 
>
> Key: DRILL-4311
> URL: https://issues.apache.org/jira/browse/DRILL-4311
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.5.0
>Reporter: Chun Chang
>
> 1.5.0-SNAPSHOT  3d0b4b02521f12e3871d6060c8f9bfce73b218a0
> Hit the following exception while running Functional automation. It's not 
> specific to a query; the same query passed in other runs, so the failure looks 
> random. The current master also feels less stable than it did a few days ago.
> {noformat}
> 2016-01-26 05:22:05,991 [29588d02-6fc1-3e49-4e4b-de4cc6205538:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 29588d02-6fc1-3e49-4e4b-de4cc6205538: select l_orderkey, l_partkey, 
> l_quantity, l_shipdate, l_shipinstruct from 
> hive.lineitem_text_partitioned_hive_hier_intstring where `year`=1993 and 
> l_orderkey > 29600 and `month`='nov'
> 2016-01-26 05:22:05,990 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out 
> of 85 using 16 threads. Time: 13ms total, 2.287035ms avg, 3ms max.
> 2016-01-26 05:22:05,982 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 29588d01-bfc1-49db-caa3-baabb0b9ff30: select distinct count(distinct c_row) 
> from data group by c_int order by 1
> 2016-01-26 05:22:05,995 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out 
> of 85 using 16 threads. Earliest start: 400.204000 μs, Latest start: 
> 12264.46 μs, Average start: 5804.976765 μs .
> 2016-01-26 05:22:05,995 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State to report: RUNNING
> 2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State change requested RUNNING --> 
> FINISHED
> 2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State to report: FINISHED
> 2016-01-26 05:22:05,997 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
> o.a.d.e.p.l.partition.PruneScanRule - Total pruning elapsed time: 128 ms
> 2016-01-26 05:22:06,016 [29588d01-51bd-c95b-a4ef-692ababd0a05:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 29588d01-51bd-c95b-a4ef-692ababd0a05: use `dfs`
> 2016-01-26 05:22:06,137 [29588d01-c725-8642-b99d-e902fd4e7f93:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Time: 0ms total, 0.945990ms avg, 0ms max.
> 2016-01-26 05:22:06,137 [29588d01-c725-8642-b99d-e902fd4e7f93:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Earliest start: 0.219000 μs, Latest start: 0.219000 μs, 
> Average start: 0.219000 μs .
> 2016-01-26 05:22:06,138 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Took 0 ms to get file statuses
> 2016-01-26 05:22:06,139 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 
> 1 using 1 threads. Time: 1ms total, 1.486007ms avg, 1ms max.
> 2016-01-26 05:22:06,140 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 
> 1 using 1 threads. Earliest start: 0.39 μs, Latest start: 0.39 μs, 
> Average start: 0.39 μs .
> 2016-01-26 05:22:06,140 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Took 1 ms to read file metadata
> 2016-01-26 05:22:06,169 [29588d01-c725-8642-b99d-e902fd4e7f93:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29588d01-c725-8642-b99d-e902fd4e7f93:0:0: State change requested 
> AWAITING

[jira] [Commented] (DRILL-4311) Unexpected exception during fragment initialization: Internal error: Error while applying rule DrillTableRule, args [rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).

2016-01-26 Thread Jinfeng Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118402#comment-15118402
 ] 

Jinfeng Ni commented on DRILL-4311:
---

The stack trace shows an NPE hit by the FileSystem. Is it possible that 
something is wrong with the file system on that particular cluster? 

{code}
Caused by: java.lang.NullPointerException: null
at com.mapr.fs.MapRFileSystem.globStatus(MapRFileSystem.java:1241) 
~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
 ~[hadoop-mapreduce-client-core-2.7.0-mapr-1506.jar:na]
at 
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) 
~[hadoop-mapreduce-client-core-2.7.0-mapr-1506.jar:na]
at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:317) 
~[hadoop-mapreduce-client-core-2.7.0-mapr-1506.jar:na]
at 
org.apache.drill.exec.store.hive.HiveMetadataProvider$1.run(HiveMetadataProvider.java:253)
 ~[drill-storage-hive-core-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.store.hive.HiveMetadataProvider$1.run(HiveMetadataProvider.java:241)
 ~[drill-storage-hive-core-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.7.0_45]
at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
 ~[hadoop-common-2.7.0-mapr-1506.jar:na]
at 
org.apache.drill.exec.store.hive.HiveMetadataProvider.splitInputWithUGI(HiveMetadataProvider.java:241)
 ~[drill-storage-hive-core-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.store.hive.HiveMetadataProvider.getPartitionInputSplits(HiveMetadataProvider.java:142)
 ~[drill-storage-hive-core-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.store.hive.HiveMetadataProvider.getStats(HiveMetadataProvider.java:105)
 ~[drill-storage-hive-core-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
... 35 common frames omitted
{code}
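
For reference, a minimal sketch of the call pattern visible in the trace, built only on the Hadoop mapred API. This is not Drill's actual HiveMetadataProvider source; the helper class and method names are illustrative assumptions.

{code}
// Illustrative only: mirrors the doAs + getSplits pattern in the trace above.
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.security.UserGroupInformation;

class SplitFetchSketch {
  // Roughly what splitInputWithUGI(...) does in the trace: compute the input
  // splits while impersonating the query user.
  static InputSplit[] fetchSplits(final InputFormat<?, ?> format,
                                  final JobConf job,
                                  UserGroupInformation ugi) throws Exception {
    return ugi.doAs(new PrivilegedExceptionAction<InputSplit[]>() {
      @Override
      public InputSplit[] run() throws IOException {
        // FileInputFormat.getSplits() lists the table's input paths via
        // FileSystem.globStatus(); a broken file-system handle on the cluster
        // (or a missing table directory) would surface right there as the NPE.
        return format.getSplits(job, 1);
      }
    });
  }
}
{code}

If the table directory was removed between planning and this listing, as suggested in Chun Chang's comment, the same NPE could appear without anything being wrong with the file system itself.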

> Unexpected exception during fragment initialization: Internal error: Error 
> while applying rule DrillTableRule, args 
> [rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[hive, 
> lineitem_text_partitioned_hive_hier_intstring])]
> 
>
> Key: DRILL-4311
> URL: https://issues.apache.org/jira/browse/DRILL-4311
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.5.0
>Reporter: Chun Chang
>
> 1.5.0-SNAPSHOT  3d0b4b02521f12e3871d6060c8f9bfce73b218a0
> Hit the following exception while running Functional automation. It's not 
> specific to a query; the same query passed in other runs, so the failure looks 
> random. The current master also feels less stable than it did a few days ago.
> {noformat}
> 2016-01-26 05:22:05,991 [29588d02-6fc1-3e49-4e4b-de4cc6205538:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 29588d02-6fc1-3e49-4e4b-de4cc6205538: select l_orderkey, l_partkey, 
> l_quantity, l_shipdate, l_shipinstruct from 
> hive.lineitem_text_partitioned_hive_hier_intstring where `year`=1993 and 
> l_orderkey > 29600 and `month`='nov'
> 2016-01-26 05:22:05,990 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out 
> of 85 using 16 threads. Time: 13ms total, 2.287035ms avg, 3ms max.
> 2016-01-26 05:22:05,982 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 29588d01-bfc1-49db-caa3-baabb0b9ff30: select distinct count(distinct c_row) 
> from data group by c_int order by 1
> 2016-01-26 05:22:05,995 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
> o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out 
> of 85 using 16 threads. Earliest start: 400.204000 μs, Latest start: 
> 12264.46 μs, Average start: 5804.976765 μs .
> 2016-01-26 05:22:05,995 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State to report: RUNNING
> 2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State change requested RUNNING --> 
> FINISHED
> 2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: State to repo

[jira] [Commented] (DRILL-4196) some TPCDS queries return wrong result when hash join is disabled

2016-01-26 Thread amit hadke (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118378#comment-15118378
 ] 

amit hadke commented on DRILL-4196:
---

[~vicky] I think I fixed the problem in the PR above.

Could you please verify that we no longer see any verification errors with 
merge join?
The branch is https://github.com/amithadke/drill/tree/DRILL-4196

Thanks
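
Per the PR title, the fix concerns not signalling end-of-data when the output batch fills up mid-merge. A rough sketch of that idea follows; it is not Drill's actual MergeJoinBatch code, and all names below are illustrative.

{code}
// Illustrative pseudo-implementation of "don't report 'no more data' just
// because the output batch is full"; not the actual Drill operator code.
enum Outcome { OK, NONE }

class MergeSketch {
  private static final int MAX_RECORDS = 4096;  // assumed output batch capacity
  private int outputCount;
  private boolean leftExhausted;
  private boolean rightExhausted;

  Outcome nextBatch() {
    while (!leftExhausted && !rightExhausted) {
      copyNextMatchedRow();
      if (++outputCount >= MAX_RECORDS) {
        // Output batch is full: emit it and keep the merge position so the
        // next call resumes here. Reporting NONE at this point (the behaviour
        // the PR title describes) would silently drop the remaining rows.
        outputCount = 0;
        return Outcome.OK;
      }
    }
    return Outcome.NONE;  // only once both inputs are truly exhausted
  }

  private void copyNextMatchedRow() { /* copy the next matching row pair */ }
}
{code}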

> some TPCDS queries return wrong result when hash join is disabled
> -
>
> Key: DRILL-4196
> URL: https://issues.apache.org/jira/browse/DRILL-4196
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Victoria Markman
>Assignee: amit hadke
> Attachments: 1.5.0-amit-branch_tpcds_sf1.txt, query40.tar, query52.tar
>
>
> With hash join disabled, query52.sql and query40.sql returned incorrect results
> with 1.4.0:
> {noformat}
> | version        | commit_id                                 | commit_message                                                      | commit_time                | build_email | build_time                 |
> | 1.4.0-SNAPSHOT | b9068117177c3b47025f52c00f67938e0c3e4732  | DRILL-4165 Add a precondition for size of merge join record batch.  | 08.12.2015 @ 01:25:34 UTC  | Unknown     | 08.12.2015 @ 03:36:25 UTC  |
> 1 row selected (2.13 seconds)
> {noformat}
> Setup and options are the same as in DRILL-4190
> See attached queries (.sql), expected result (.e_tsv) and actual output (.out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4196) some TPCDS queries return wrong result when hash join is disabled

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118374#comment-15118374
 ] 

ASF GitHub Bot commented on DRILL-4196:
---

GitHub user amithadke opened a pull request:

https://github.com/apache/drill/pull/338

DRILL-4196 Fix to stop returning no more data when output batch is full during 
merge.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amithadke/drill DRILL-4196

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/338.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #338


commit 4cd7800ab0e41bd41c9aecf75b87533c117e187d
Author: Amit Hadke 
Date:   2016-01-27T00:52:25Z

DRILL-4196 Fix to stop returning no more data when output batch is full 
during merge.




> some TPCDS queries return wrong result when hash join is disabled
> -
>
> Key: DRILL-4196
> URL: https://issues.apache.org/jira/browse/DRILL-4196
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Victoria Markman
>Assignee: amit hadke
> Attachments: 1.5.0-amit-branch_tpcds_sf1.txt, query40.tar, query52.tar
>
>
> With hash join disabled, query52.sql and query40.sql returned incorrect results
> with 1.4.0:
> {noformat}
> | version        | commit_id                                 | commit_message                                                      | commit_time                | build_email | build_time                 |
> | 1.4.0-SNAPSHOT | b9068117177c3b47025f52c00f67938e0c3e4732  | DRILL-4165 Add a precondition for size of merge join record batch.  | 08.12.2015 @ 01:25:34 UTC  | Unknown     | 08.12.2015 @ 03:36:25 UTC  |
> 1 row selected (2.13 seconds)
> {noformat}
> Setup and options are the same as in DRILL-4190
> See attached queries (.sql), expected result (.e_tsv) and actual output (.out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4314) Unit Test Framework Enhancement: Schema check for zero-row query

2016-01-26 Thread Sean Hsuan-Yi Chu (JIRA)
Sean Hsuan-Yi Chu created DRILL-4314:


 Summary: Unit Test Framework Enhancement: Schema check for 
zero-row query
 Key: DRILL-4314
 URL: https://issues.apache.org/jira/browse/DRILL-4314
 Project: Apache Drill
  Issue Type: New Feature
  Components: Tools, Build & Test
Reporter: Sean Hsuan-Yi Chu
Assignee: Sean Hsuan-Yi Chu
 Fix For: 1.5.0


Given that the limit-zero improvement is going through development, the unit 
test framework should offer a schema check for zero-row queries.
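
A hypothetical usage sketch of what such a check could look like from a test. The testBuilder() chain follows Drill's existing unit-test idiom, but the schemaBaseLine(...) call, its argument shape, and the import locations are assumptions about the requested API rather than the implemented feature.

{code}
// Hypothetical test: verify only the schema of a LIMIT 0 query, which returns
// zero rows. schemaBaseLine(...) is an assumed name for the requested check.
import java.util.Collections;
import java.util.List;
import org.apache.commons.lang3.tuple.Pair;
import org.apache.drill.BaseTestQuery;
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.common.types.TypeProtos;
import org.apache.drill.common.types.Types;
import org.junit.Test;

public class TestLimitZeroSchema extends BaseTestQuery {
  @Test
  public void schemaOfZeroRowQuery() throws Exception {
    List<Pair<SchemaPath, TypeProtos.MajorType>> expectedSchema =
        Collections.singletonList(Pair.of(
            SchemaPath.getSimplePath("employee_id"),
            Types.optional(TypeProtos.MinorType.BIGINT)));

    testBuilder()
        .sqlQuery("select employee_id from cp.`employee.json` limit 0")
        .schemaBaseLine(expectedSchema)  // assumed API: check schema, expect no rows
        .go();
  }
}
{code}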



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-3944) Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"

2016-01-26 Thread Parth Chandra (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118336#comment-15118336
 ] 

Parth Chandra commented on DRILL-3944:
--

That might work. You would need to do this for all the functions in 
DirectoryExplorers. However, the message might still need improvement. I would 
suggest catching the UnsupportedOperationException, wrapping it in a 
DrillUserException, and providing a better message.
Also, what happens if constant_folding is set to true? Does the UDF work 
correctly, and does it get called only once?
If not, the error message should not suggest that option to the end user.
[~jaltekruse] what do you think?
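
A minimal sketch of that wrapping, assuming Drill's UserException builder; the functionError category, the helper shape, and the message text here are placeholders for illustration, not the agreed fix.

{code}
import java.util.concurrent.Callable;
import org.apache.drill.common.exceptions.UserException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class DirectoryExplorerErrorSketch {
  private static final Logger logger =
      LoggerFactory.getLogger(DirectoryExplorerErrorSketch.class);

  // `lookup` stands in for the real partition-explorer call made by MAXDIR.
  static String resolveMaxdir(Callable<String> lookup) {
    try {
      return lookup.call();
    } catch (UnsupportedOperationException e) {
      // Re-wrap with a user-facing message instead of leaking the raw exception.
      throw UserException.functionError(e)
          .message("MAXDIR could not be evaluated at planning time for this query.")
          .build(logger);
    } catch (Exception e) {
      throw UserException.functionError(e).build(logger);
    }
  }
}
{code}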


> Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"
> --
>
> Key: DRILL-3944
> URL: https://issues.apache.org/jira/browse/DRILL-3944
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: 1.2.0
>Reporter: Jitendra
>Assignee: Jason Altekruse
> Attachments: newStackTrace.txt
>
>
> We are facing an issue with the MAXDIR function; below is the query we are 
> using to reproduce it.
> 0: jdbc:drill:drillbit=localhost> select maxdir('vspace.wspace', 'freemat2') 
> from vspace.wspace.`freemat2`;
> Error: SYSTEM ERROR: CompileException: Line 75, Column 70: Unknown variable 
> or type "FILE_SEPARATOR"
> Fragment 0:0
> [Error Id: d17c6e48-554d-4934-bc4d-783ca3dc6f51 on 10.10.99.71:31010] 
> (state=,code=0);
> Below are the drillbit logs.
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: RUNNING
> 2015-10-09 21:26:22,038 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested RUNNING --> 
> FINISHED
> 2015-10-09 21:26:22,039 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: FINISHED
> 2015-10-09 21:29:59,281 [29e7ce27-9cad-9d8a-a482-39f54cc7deda:foreman] INFO 
> o.a.d.e.store.mock.MockStorageEngine - Failure while attempting to check for 
> Parquet metadata file.
> java.io.IOException: Open failed for file: /vspace/wspace/freemat2/20151005, 
> error: Invalid argument (22)
> at com.mapr.fs.MapRClientImpl.open(MapRClientImpl.java:212) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at com.mapr.fs.MapRFileSystem.open(MapRFileSystem.java:862) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800) 
> ~[hadoop-common-2.5.1-mapr-1503.jar:na]
> at 
> org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:132)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher$MagicStringMatcher.matches(BasicFormatMatcher.java:142)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher.isFileReadable(BasicFormatMatcher.java:112)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isDirReadable(ParquetFormatPlugin.java:256)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isReadable(ParquetFormatPlugin.java:210)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:326)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:153)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.getNewEntry(ExpandingConcurrentMap.java:96)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.get(ExpandingConcurrentMap.java:90)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.getTable(WorkspaceSchemaFactory.java:276)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getTable(SimpleCalciteSchema.java:83)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTableFrom(CalciteCatalogReader.java:116)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:99)
>  [calcite-core-1.4

[jira] [Created] (DRILL-4313) C++ client should manage load balance of queries

2016-01-26 Thread Parth Chandra (JIRA)
Parth Chandra created DRILL-4313:


 Summary: C++ client should manage load balance of queries
 Key: DRILL-4313
 URL: https://issues.apache.org/jira/browse/DRILL-4313
 Project: Apache Drill
  Issue Type: Improvement
Reporter: Parth Chandra


The current C++ client handles multiple parallel queries over the same 
connection, but that creates a bottleneck because all of the queries get sent 
to the same drillbit.
The client can manage this more effectively by maintaining a configurable pool 
of connections and round-robining queries across them.
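
As a language-agnostic illustration of the round-robin idea (written in Java for brevity; none of these names belong to the actual C++ client API):

{code}
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Keep one connection per drillbit and hand out the next one for each query.
class RoundRobinPool<C> {
  private final List<C> connections;
  private final AtomicLong counter = new AtomicLong();

  RoundRobinPool(List<C> connections) {
    this.connections = connections;
  }

  // Spreads queries across drillbits instead of funnelling them all through
  // the single connection the current client uses.
  C next() {
    long n = counter.getAndIncrement();
    int idx = (int) Math.floorMod(n, (long) connections.size());
    return connections.get(idx);
  }
}
{code}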



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (DRILL-3739) NPE on select from Hive for HBase table

2016-01-26 Thread Krystal (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krystal closed DRILL-3739.
--

Verified.

> NPE on select from Hive for HBase table
> ---
>
> Key: DRILL-3739
> URL: https://issues.apache.org/jira/browse/DRILL-3739
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: ckran
>Assignee: Venki Korukanti
>Priority: Critical
> Fix For: 1.5.0
>
>
> For a table in HBase or MapR-DB with metadata created in Hive so that it can 
> be accessed through Beeline or Hue, a query from Drill fails with
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> NullPointerException [Error Id: 1cfd2a36-bc73-4a36-83ee-ac317b8e6cdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-3739) NPE on select from Hive for HBase table

2016-01-26 Thread Krystal (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117854#comment-15117854
 ] 

Krystal commented on DRILL-3739:


git.commit.id.abbrev=3d0b4b0

Verified that the bug is fixed.

> NPE on select from Hive for HBase table
> ---
>
> Key: DRILL-3739
> URL: https://issues.apache.org/jira/browse/DRILL-3739
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: ckran
>Assignee: Venki Korukanti
>Priority: Critical
> Fix For: 1.5.0
>
>
> For a table in HBase or MapR-DB with metadata created in Hive so that it can 
> be accessed through Beeline or Hue, a query from Drill fails with
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> NullPointerException [Error Id: 1cfd2a36-bc73-4a36-83ee-ac317b8e6cdb]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-4312) JDBC PlugIN - MySQL Causes errors in Drill INFORMATION_SCHEMA

2016-01-26 Thread Andries Engelbrecht (JIRA)
Andries Engelbrecht created DRILL-4312:
--

 Summary: JDBC PlugIN - MySQL Causes errors in Drill 
INFORMATION_SCHEMA
 Key: DRILL-4312
 URL: https://issues.apache.org/jira/browse/DRILL-4312
 Project: Apache Drill
  Issue Type: Bug
  Components: Storage - Other
Affects Versions: 1.4.0
Reporter: Andries Engelbrecht


When MySQL is connected with the JDBC plugin, queries to Drill's 
INFORMATION_SCHEMA fail, specifically against COLUMNS and on 
mysql.performance_schema.

{query}
SELECT DISTINCT TABLE_SCHEMA as NAME_SPACE, TABLE_NAME as TAB_NAME FROM 
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA <>'INFORMATION_SCHEMA' and 
TABLE_SCHEMA <> 'sys';
{/query}

{result}
Error: SYSTEM ERROR: MySQLSyntaxErrorException: Unknown table engine 
'PERFORMANCE_SCHEMA'

Fragment 0:0
{/result}

{query}
0: jdbc:drill:> select * from INFORMATION_SCHEMA.`COLUMNS` where TABLE_SCHEMA = 
'mysql.performance_schema';
{/query}

{result}
Error: SYSTEM ERROR: MySQLSyntaxErrorException: Unknown table engine 
'PERFORMANCE_SCHEMA'

Fragment 0:0
{/result}



{drillbit.log}
[Error Id: 45d23eb8-0bcf-41e2-84e2-4626e7fb0d33 on drilldemo:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534)
 ~[drill-common-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:321)
 [drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184)
 [drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290)
 [drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.4.0.jar:1.4.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: java.lang.RuntimeException: Exception while reading definition of 
table 'cond_instances'
at 
org.apache.calcite.adapter.jdbc.JdbcTable.getRowType(JdbcTable.java:103) 
~[calcite-core-1.4.0-drill-1.4.0-mapr-r1.jar:1.4.0-drill-1.4.0-mapr-r1]
at 
org.apache.drill.exec.store.ischema.RecordGenerator.scanSchema(RecordGenerator.java:140)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.RecordGenerator.scanSchema(RecordGenerator.java:120)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.RecordGenerator.scanSchema(RecordGenerator.java:120)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.RecordGenerator.scanSchema(RecordGenerator.java:108)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.SelectedTable.getRecordReader(SelectedTable.java:57)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:36)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:147)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:170)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:127)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:170)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:101)
 ~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:79) 
~[drill-java-exec-1.4.0.jar:1.4.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:230)
 [drill-java-exec-1.4.0.jar:1.4.0]
... 4 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown 
table engine 'PERFORMANCE_SCHEMA'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[na:1.8.0_51]
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[na:1.8.0_51]
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[na:1.8.0_51]
at java.lang.reflect.Constructor.newInstance(Constructor.java:422) 
~[na:1.8.0_51]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) 
~[mysql-connector-java-5.1.38

[jira] [Created] (DRILL-4311) Unexpected exception during fragment initialization: Internal error: Error while applying rule DrillTableRule, args [rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).[]

2016-01-26 Thread Chun Chang (JIRA)
Chun Chang created DRILL-4311:
-

 Summary: Unexpected exception during fragment initialization: 
Internal error: Error while applying rule DrillTableRule, args 
[rel#6431439:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[hive, 
lineitem_text_partitioned_hive_hier_intstring])]
 Key: DRILL-4311
 URL: https://issues.apache.org/jira/browse/DRILL-4311
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 1.5.0
Reporter: Chun Chang


1.5.0-SNAPSHOT  3d0b4b02521f12e3871d6060c8f9bfce73b218a0

Hit the following exception while running Functional automation. It's not 
specific to a query; the same query passed in other runs, so the failure looks 
random. The current master also feels less stable than it did a few days ago.

{noformat}
2016-01-26 05:22:05,991 [29588d02-6fc1-3e49-4e4b-de4cc6205538:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
29588d02-6fc1-3e49-4e4b-de4cc6205538: select l_orderkey, l_partkey, l_quantity, 
l_shipdate, l_shipinstruct from 
hive.lineitem_text_partitioned_hive_hier_intstring where `year`=1993 and 
l_orderkey > 29600 and `month`='nov'
2016-01-26 05:22:05,990 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out of 
85 using 16 threads. Time: 13ms total, 2.287035ms avg, 3ms max.
2016-01-26 05:22:05,982 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
29588d01-bfc1-49db-caa3-baabb0b9ff30: select distinct count(distinct c_row) 
from data group by c_int order by 1
2016-01-26 05:22:05,995 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 85 out of 
85 using 16 threads. Earliest start: 400.204000 μs, Latest start: 12264.46 
μs, Average start: 5804.976765 μs .
2016-01-26 05:22:05,995 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: 
State to report: RUNNING
2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: 
State change requested RUNNING --> FINISHED
2016-01-26 05:22:05,997 [29588d02-0b3c-0b0f-fbac-c219dd631d92:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 29588d02-0b3c-0b0f-fbac-c219dd631d92:0:0: 
State to report: FINISHED
2016-01-26 05:22:05,997 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:foreman] INFO  
o.a.d.e.p.l.partition.PruneScanRule - Total pruning elapsed time: 128 ms
2016-01-26 05:22:06,016 [29588d01-51bd-c95b-a4ef-692ababd0a05:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
29588d01-51bd-c95b-a4ef-692ababd0a05: use `dfs`
2016-01-26 05:22:06,137 [29588d01-c725-8642-b99d-e902fd4e7f93:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 using 
1 threads. Time: 0ms total, 0.945990ms avg, 0ms max.
2016-01-26 05:22:06,137 [29588d01-c725-8642-b99d-e902fd4e7f93:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 using 
1 threads. Earliest start: 0.219000 μs, Latest start: 0.219000 μs, Average 
start: 0.219000 μs .
2016-01-26 05:22:06,138 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Took 0 ms to get file statuses
2016-01-26 05:22:06,139 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 1 
using 1 threads. Time: 1ms total, 1.486007ms avg, 1ms max.
2016-01-26 05:22:06,140 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 1 
using 1 threads. Earliest start: 0.39 μs, Latest start: 0.39 μs, 
Average start: 0.39 μs .
2016-01-26 05:22:06,140 [29588d01-bfc1-49db-caa3-baabb0b9ff30:foreman] INFO  
o.a.d.exec.store.parquet.Metadata - Took 1 ms to read file metadata
2016-01-26 05:22:06,169 [29588d01-c725-8642-b99d-e902fd4e7f93:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 29588d01-c725-8642-b99d-e902fd4e7f93:0:0: 
State change requested AWAITING_ALLOCATION --> RUNNING
2016-01-26 05:22:06,169 [29588d01-c725-8642-b99d-e902fd4e7f93:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 29588d01-c725-8642-b99d-e902fd4e7f93:0:0: 
State to report: RUNNING
2016-01-26 05:22:06,175 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 29588d02-7206-dac7-a1dd-bb4a99fed1b9:0:0: 
State change requested AWAITING_ALLOCATION --> RUNNING
2016-01-26 05:22:06,175 [29588d02-7206-dac7-a1dd-bb4a99fed1b9:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 29588d02-7206-dac7-a1dd-bb4a99fed1b9:0:0: 
State to report: RUNNING
2016-01-26 05:22:06,247 [29588d01-c725-8642-b99d-e902fd4e7

[jira] [Updated] (DRILL-3944) Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"

2016-01-26 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-3944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-3944:

Attachment: newStackTrace.txt

> Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"
> --
>
> Key: DRILL-3944
> URL: https://issues.apache.org/jira/browse/DRILL-3944
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: 1.2.0
>Reporter: Jitendra
>Assignee: Jason Altekruse
> Attachments: newStackTrace.txt
>
>
> We are facing an issue with the MAXDIR function; below is the query we are 
> using to reproduce it.
> 0: jdbc:drill:drillbit=localhost> select maxdir('vspace.wspace', 'freemat2') 
> from vspace.wspace.`freemat2`;
> Error: SYSTEM ERROR: CompileException: Line 75, Column 70: Unknown variable 
> or type "FILE_SEPARATOR"
> Fragment 0:0
> [Error Id: d17c6e48-554d-4934-bc4d-783ca3dc6f51 on 10.10.99.71:31010] 
> (state=,code=0);
> Below are the drillbit logs.
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: RUNNING
> 2015-10-09 21:26:22,038 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested RUNNING --> 
> FINISHED
> 2015-10-09 21:26:22,039 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: FINISHED
> 2015-10-09 21:29:59,281 [29e7ce27-9cad-9d8a-a482-39f54cc7deda:foreman] INFO 
> o.a.d.e.store.mock.MockStorageEngine - Failure while attempting to check for 
> Parquet metadata file.
> java.io.IOException: Open failed for file: /vspace/wspace/freemat2/20151005, 
> error: Invalid argument (22)
> at com.mapr.fs.MapRClientImpl.open(MapRClientImpl.java:212) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at com.mapr.fs.MapRFileSystem.open(MapRFileSystem.java:862) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800) 
> ~[hadoop-common-2.5.1-mapr-1503.jar:na]
> at 
> org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:132)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher$MagicStringMatcher.matches(BasicFormatMatcher.java:142)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher.isFileReadable(BasicFormatMatcher.java:112)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isDirReadable(ParquetFormatPlugin.java:256)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isReadable(ParquetFormatPlugin.java:210)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:326)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:153)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.getNewEntry(ExpandingConcurrentMap.java:96)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.get(ExpandingConcurrentMap.java:90)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.getTable(WorkspaceSchemaFactory.java:276)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getTable(SimpleCalciteSchema.java:83)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTableFrom(CalciteCatalogReader.java:116)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:99)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:70)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.sql.validate.EmptyScope.getTableNamespace(EmptyScope.java:75)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.sql.validate.DelegatingScope.getTableNamespace(DelegatingScope.java:124)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.sql.validate.Id

[jira] [Commented] (DRILL-3944) Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"

2016-01-26 Thread Arina Ielchiieva (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-3944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117535#comment-15117535
 ] 

Arina Ielchiieva commented on DRILL-3944:
-

As part of the solution we can update the template 
src/main/codegen/templates/DirectoryExplorers.java by moving FILE_SEPARATOR from 
class level to method level, which will result in a different, probably more 
obvious, error:
*org.apache.drill.exec.rpc.RpcException: 
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
UnsupportedOperationException: The partition explorer interface can only be 
used in functions that can be evaluated at planning time. Make sure that the 
planner.enable_constant_folding configuration option is set to true.*
The full stack trace is attached (newStackTrace.txt).
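
Roughly, the change amounts to the following illustration of class-level versus method-level scope in the function body; this is not the exact template text, and the value assigned to the separator is an assumption.

{code}
class DirectoryExplorerSketch {
  // Before: class-level constant. Runtime code generation copies only the
  // method body into the generated class, so the field is not visible there
  // and compilation fails with "Unknown variable or type FILE_SEPARATOR".
  // private static final String FILE_SEPARATOR = System.getProperty("file.separator");

  String buildPath(String workspace, String table) {
    // After: method-level value, always carried along with the generated body.
    final String fileSeparator = System.getProperty("file.separator");
    return workspace + fileSeparator + table;
  }
}
{code}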

> Drill MAXDIR Unknown variable or type "FILE_SEPARATOR"
> --
>
> Key: DRILL-3944
> URL: https://issues.apache.org/jira/browse/DRILL-3944
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: 1.2.0
>Reporter: Jitendra
>Assignee: Jason Altekruse
>
> We are facing an issue with the MAXDIR function; below is the query we are 
> using to reproduce it.
> 0: jdbc:drill:drillbit=localhost> select maxdir('vspace.wspace', 'freemat2') 
> from vspace.wspace.`freemat2`;
> Error: SYSTEM ERROR: CompileException: Line 75, Column 70: Unknown variable 
> or type "FILE_SEPARATOR"
> Fragment 0:0
> [Error Id: d17c6e48-554d-4934-bc4d-783ca3dc6f51 on 10.10.99.71:31010] 
> (state=,code=0);
> Below are the drillbit logs.
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2015-10-09 21:26:21,972 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: RUNNING
> 2015-10-09 21:26:22,038 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.fragment.FragmentExecutor - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State change requested RUNNING --> 
> FINISHED
> 2015-10-09 21:26:22,039 [29e7cf02-02bf-b007-72f2-52c67c80ea1c:frag:0:0] INFO 
> o.a.d.e.w.f.FragmentStatusReporter - 
> 29e7cf02-02bf-b007-72f2-52c67c80ea1c:0:0: State to report: FINISHED
> 2015-10-09 21:29:59,281 [29e7ce27-9cad-9d8a-a482-39f54cc7deda:foreman] INFO 
> o.a.d.e.store.mock.MockStorageEngine - Failure while attempting to check for 
> Parquet metadata file.
> java.io.IOException: Open failed for file: /vspace/wspace/freemat2/20151005, 
> error: Invalid argument (22)
> at com.mapr.fs.MapRClientImpl.open(MapRClientImpl.java:212) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at com.mapr.fs.MapRFileSystem.open(MapRFileSystem.java:862) 
> ~[maprfs-4.1.0-mapr.jar:4.1.0-mapr]
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:800) 
> ~[hadoop-common-2.5.1-mapr-1503.jar:na]
> at 
> org.apache.drill.exec.store.dfs.DrillFileSystem.open(DrillFileSystem.java:132)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher$MagicStringMatcher.matches(BasicFormatMatcher.java:142)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.BasicFormatMatcher.isFileReadable(BasicFormatMatcher.java:112)
>  ~[drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isDirReadable(ParquetFormatPlugin.java:256)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.parquet.ParquetFormatPlugin$ParquetFormatMatcher.isReadable(ParquetFormatPlugin.java:210)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:326)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:153)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.getNewEntry(ExpandingConcurrentMap.java:96)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.get(ExpandingConcurrentMap.java:90)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.getTable(WorkspaceSchemaFactory.java:276)
>  [drill-java-exec-1.2.0.jar:1.2.0]
> at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getTable(SimpleCalciteSchema.java:83)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTableFrom(CalciteCatalogReader.java:116)
>  [calcite-core-1.4.0-drill-r5.jar:1.4.0-drill-r5]
> at 
> org.apache.calcite.prepare.CalciteCatal