[GitHub] drill issue #585: DRILL-3898 : Sort spill was modified to catch all errors, ...

2016-09-09 Thread paul-rogers
Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/585
  
LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill issue #585: DRILL-3898 : Sort spill was modified to catch all errors, ...

2016-09-09 Thread Ben-Zvi
Github user Ben-Zvi commented on the issue:

https://github.com/apache/drill/pull/585
  
Below are results from testing: the first run with not enough disk space, and 
the second with the spill storage directory missing:

0: jdbc:drill:zk=local> create table store_sales_20(ss_item_sk, 
ss_customer_sk, ss_cdemo_sk, ss_hdemo_sk, s_sold_date_sk, ss_promo_sk) 
partition by (ss_promo_sk) as
. . . . . . . . . . . >  select
. . . . . . . . . . . >case when columns[2] = '' then cast(null as 
varchar(100)) else cast(columns[2] as varchar(100)) end,
. . . . . . . . . . . >case when columns[3] = '' then cast(null as 
varchar(100)) else cast(columns[3] as varchar(100)) end,
. . . . . . . . . . . >case when columns[4] = '' then cast(null as 
varchar(100)) else cast(columns[4] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[5] = '' then cast(null as 
varchar(100)) else cast(columns[5] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[0] = '' then cast(null as 
varchar(100)) else cast(columns[0] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[8] = '' then cast(null as 
varchar(100)) else cast(columns[8] as varchar(100)) end
. . . . . . . . . . . > FROM 
dfs.`/Users/boazben-zvi/data/store_sales/store_sales.dat`;
Error: RESOURCE ERROR: External Sort encountered an error while spilling to 
disk

java.io.IOException: No space left on device
Fragment 0:0

[Error Id: 35d13ef6-f88a-4a80-9f5e-ddb15efc9d92 on 10.250.57.63:31010] 
(state=,code=0)
0: jdbc:drill:zk=local> create table store_sales_20(ss_item_sk, 
ss_customer_sk, ss_cdemo_sk, ss_hdemo_sk, s_sold_date_sk, ss_promo_sk) 
partition by (ss_promo_sk) as
. . . . . . . . . . . >  select
. . . . . . . . . . . >case when columns[2] = '' then cast(null as 
varchar(100)) else cast(columns[2] as varchar(100)) end,
. . . . . . . . . . . >case when columns[3] = '' then cast(null as 
varchar(100)) else cast(columns[3] as varchar(100)) end,
. . . . . . . . . . . >case when columns[4] = '' then cast(null as 
varchar(100)) else cast(columns[4] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[5] = '' then cast(null as 
varchar(100)) else cast(columns[5] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[0] = '' then cast(null as 
varchar(100)) else cast(columns[0] as varchar(100)) end, 
. . . . . . . . . . . >case when columns[8] = '' then cast(null as 
varchar(100)) else cast(columns[8] as varchar(100)) end
. . . . . . . . . . . > FROM 
dfs.`/Users/boazben-zvi/data/store_sales/store_sales.dat`;
Error: RESOURCE ERROR: External Sort encountered an error while spilling to 
disk

Mkdirs failed to create 
/tmp/drill/spill/282cbdbc-630a-2218-3871-165491f5e96c_majorfragment0_minorfragment0_operator6
 (exists=false, cwd=file:/Users/boazben-zvi/IdeaProjects/drill)
Fragment 0:0

[Error Id: dea8b3fd-9661-48b5-9a3c-11d2dadf8f07 on 10.250.57.63:31010] 
(state=,code=0)





[GitHub] drill pull request #585: DRILL-3898 : Sort spill was modified to catch all e...

2016-09-09 Thread Ben-Zvi
GitHub user Ben-Zvi opened a pull request:

https://github.com/apache/drill/pull/585

DRILL-3898 :  Sort spill was modified to catch all errors, ignore rep…

…eated errors while closing the new group and issue a more detailed error 
message.

It seems that the spill I/O can run into various kinds of errors (no space 
left on device, failure to create a file, ...), which are thrown as different 
exception classes. Hence the catch() statement was changed to catch the more 
general Throwable, and the exception's message is now included for more detail 
(e.g., no disk space).

Before this change the "no disk space" Throwable was not caught, and thus 
execution continued as if the spill had succeeded.

Closing the newGroup can also hit I/O errors (e.g., when flushing), so a 
try/catch was added to ignore those secondary errors.

Note that this change should also fix DRILL-4542 ("if external sort fails to 
spill to disk, memory is leaked and wrong error message is displayed").
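The change as described can be sketched as follows; this is a hedged illustration with made-up names (`SpillSketch`, `spillToDisk`, `newGroup` as a plain `Closeable`), not Drill's actual external-sort code:

```java
import java.io.Closeable;

// Illustrative sketch of catching all spill errors: any Throwable from the
// spill I/O is caught, the partially written group is closed best-effort
// (secondary errors ignored), and the original message is surfaced.
final class SpillSketch {
  static void spill(Runnable spillToDisk, Closeable newGroup) {
    try {
      spillToDisk.run();  // spill I/O may fail with many exception classes
    } catch (Throwable t) {
      try {
        newGroup.close();  // a flush here may fail on the same full disk
      } catch (Throwable ignored) {
        // suppressed: report only the primary failure
      }
      throw new RuntimeException(
          "External Sort encountered an error while spilling to disk: "
              + t.getMessage(), t);
    }
  }
}
```

Catching `Throwable` rather than `IOException` is the point of the fix: a full disk can surface through exception classes that an `IOException` catch would miss.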

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Ben-Zvi/drill DRILL-3898

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/585.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #585


commit e988f1644be1d9fde24a489d94c7dbc54f8e82d8
Author: Boaz Ben-Zvi 
Date:   2016-09-09T23:36:03Z

DRILL-3898 :  Sort spill was modified to catch all errors, ignore repeated 
errors while closing the new group and issue a more detailed error message.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (DRILL-4885) WHERE clause causing a ClassCastException on HBase tables

2016-09-09 Thread Ki Kang (JIRA)
Ki Kang created DRILL-4885:
--

 Summary: WHERE clause causing a ClassCastException on HBase tables
 Key: DRILL-4885
 URL: https://issues.apache.org/jira/browse/DRILL-4885
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.5.0
Reporter: Ki Kang


I am trying to figure out why I am getting a ClassCastException when I run the 
following query. If I change the “FROM” clause to just “FROM (VALUES(0))” it 
works just fine, but whenever the “FROM” references an HBase table, I get the 
error. I know that the HBase table is valid, because if I remove the WHERE 
clause the query does not throw an error.

SELECT b.`date` FROM (
  SELECT TO_DATE(CONCAT(a.`jArray`[0], '-', a.`jArray`[1], '-', a.`jArray`[2]), 
'yyyy-MM-dd') `date` FROM (
SELECT CONVERT_FROM(REGEXP_REPLACE('["2016":"08":"03"]', ':', ','), 'JSON') 
`jArray`
--FROM (VALUES(0))
FROM `hbase`.`SomeValidTable`
  ) a
) b
WHERE b.`date` = '2016-08-03'
LIMIT 1

SYSTEM ERROR: ClassCastException: 
org.apache.drill.common.expression.FunctionCall cannot be cast to 
org.apache.drill.common.expression.SchemaPath

From: rahul challapalli 
Date: Thu, Sep 1, 2016 at 11:09 AM
Subject: Re: WHERE clause causing a ClassCastException on HBase tables
To: dev 


This is a bug. The query is failing at the planning stage itself. Can you raise 
a jira for the same with the details you posted here?

- Rahul




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (DRILL-4877) max(dir0), max(dir1) query against parquet data slower by 2X

2016-09-09 Thread Aman Sinha (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Sinha resolved DRILL-4877.
---
   Resolution: Fixed
Fix Version/s: 1.9.0

Fixed in commit #: 18866d5

> max(dir0), max(dir1) query against parquet data slower by 2X
> 
>
> Key: DRILL-4877
> URL: https://issues.apache.org/jira/browse/DRILL-4877
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Query Planning & Optimization
>Affects Versions: 1.9.0
> Environment: 4 node cluster centos
>Reporter: Khurram Faraaz
>Assignee: Aman Sinha
>Priority: Critical
> Fix For: 1.9.0
>
>
> max(dir0), max(dir1) query against parquet data slower by 2X
> test was run with meta data cache on both 1.7.0 and 1.9.0
> there is a difference in query plan and also execution time on 1.9.0 is close 
> to 2X that on 1.7.0 
> Test from Drill 1.9.0 git commit id: 28d315bb
> on 4 node Centos cluster
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> select max(dir0), max(dir1), max(dir2) from 
> `DRILL_4589`;
> +---------+---------+---------+
> | EXPR$0  | EXPR$1  | EXPR$2  |
> +---------+---------+---------+
> | 2015    | Q4      | null    |
> +---------+---------+---------+
> 1 row selected (70.644 seconds)
> {noformat}
> Query plan for the above query; note that in Drill 1.9.0 usedMetadataFile is 
> not shown in the query plan text.
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> explain plan for select max(dir0), max(dir1), 
> max(dir2) from `DRILL_4589`;
> +------+------+
> | text | json |
> +------+------+
> | 00-00    Screen
> 00-01      Project(EXPR$0=[$0], EXPR$1=[$1], EXPR$2=[$2])
> 00-02        StreamAgg(group=[{}], EXPR$0=[MAX($0)], EXPR$1=[MAX($1)], 
> EXPR$2=[MAX($2)])
> 00-03          UnionExchange
> 01-01            StreamAgg(group=[{}], EXPR$0=[MAX($0)], EXPR$1=[MAX($1)], 
> EXPR$2=[MAX($2)])
> 01-02              Scan(groupscan=[ParquetGroupScan 
> [entries=[ReadEntryWithPath [path=/tmp/DRILL_4589/1990/Q1/f672.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2011/Q4/f162.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2000/Q2/f1101.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1996/Q2/f110.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2006/Q3/f1192.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1999/Q2/f174.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2006/Q4/f885.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2001/Q3/f1720.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2001/Q1/f1779.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1991/Q2/f629.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2003/Q4/f821.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2015/Q3/f896.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2002/Q2/f1458.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2004/Q4/f1756.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2001/Q2/f1490.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2003/Q3/f1137.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2013/Q1/f561.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1990/Q3/f1562.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2003/Q1/f1445.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2006/Q1/f236.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1992/Q4/f1209.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2014/Q2/f518.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1993/Q4/f1598.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2008/Q1/f780.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1999/Q1/f1763.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1990/Q4/f381.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1990/Q1/f1870.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2014/Q1/f915.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2001/Q2/f673.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1998/Q1/f736.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2013/Q2/f749.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2007/Q3/f111.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1993/Q3/f776.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2002/Q1/f403.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2005/Q2/f904.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2000/Q4/f944.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1994/Q2/f506.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1994/Q4/f612.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1991/Q1/f1838.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2012/Q2/f1764.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2010/Q1/f684.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/2005/Q4/f176.parquet], 
> ReadEntryWithPath [path=/tmp/DRILL_4589/1991/Q4/f150.parquet], 
> ReadEntryWithPath 

[GitHub] drill pull request #583: DRILL-4877: If pruning was not applicable only keep...

2016-09-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/583




[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78177558
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
       }
     }
   }
+
+  /**
+   * Function registry holder. Stores function implementations by jar name, function name.
+   */
+  private class GenericRegistryHolder<T, U> {
+    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
+    private final AutoCloseableLock readLock = new AutoCloseableLock(readWriteLock.readLock());
+    private final AutoCloseableLock writeLock = new AutoCloseableLock(readWriteLock.writeLock());
+
+    // jar name -> Map<function name, List<signature>>
+    private final Map<T, Map<T, List<T>>> jars;
+
+    // function name -> Map<signature, function holder>
+    private final Map<T, Map<T, U>> functions;
+
+    public GenericRegistryHolder() {
+      this.functions = Maps.newHashMap();
+      this.jars = Maps.newHashMap();
+    }
+
+    public void addJar(T jName, Map<T, Pair<T, U>> sNameMap) {
+      try (AutoCloseableLock lock = writeLock.open()) {
+        Map<T, List<T>> map = jars.get(jName);
+        if (map != null) {
+          removeAllByJar(jName);
+        }
+        map = Maps.newHashMap();
+        jars.put(jName, map);
+
+        for (Entry<T, Pair<T, U>> entry : sNameMap.entrySet()) {
+          T sName = entry.getKey();
+          Pair<T, U> pair = entry.getValue();
+          addFunction(jName, pair.getKey(), sName, pair.getValue());
+        }
+      }
+    }
+
+    public void removeJar(T jName) {
+      try (AutoCloseableLock lock = writeLock.open()) {
+        removeAllByJar(jName);
+      }
+    }
+
+    public List<T> getAllJarNames() {
+      try (AutoCloseableLock lock = readLock.open()) {
+        return Lists.newArrayList(jars.keySet());
+      }
+    }
+
+    public List<T> getAllFunctionNames(T jName) {
+      try (AutoCloseableLock lock = readLock.open()) {
+        Map<T, List<T>> map = jars.get(jName);
+        return map == null ? Lists.newArrayList() : Lists.newArrayList(map.keySet());
+      }
+    }
+
+    public ListMultimap<T, U> getAllFunctionsWithHolders() {
+      try (AutoCloseableLock lock = readLock.open()) {
+        ListMultimap<T, U> multimap = ArrayListMultimap.create();
+        for (Entry<T, Map<T, U>> entry : functions.entrySet()) {
+          multimap.putAll(entry.getKey(), Lists.newArrayList(entry.getValue().values()));
+        }
+        return multimap;
+      }
+    }
+
+    public ListMultimap<T, T> getAllFunctionsWithSignatures() {
+      try (AutoCloseableLock lock = readLock.open()) {
+        ListMultimap<T, T> multimap = ArrayListMultimap.create();
+        for (Entry<T, Map<T, U>> entry : functions.entrySet()) {
+          multimap.putAll(entry.getKey(), Lists.newArrayList(entry.getValue().keySet()));
+        }
+        return multimap;
+      }
+    }
+
+    public List<U> getHoldersByFunctionName(T fName) {
+      try (AutoCloseableLock lock = readLock.open()) {
+        Map<T, U> map = functions.get(fName);
+        return map == null ? Lists.newArrayList() : Lists.newArrayList(map.values());
+      }
+    }
+
+    public boolean containsJar(T jName) {
+      try (AutoCloseableLock lock = readLock.open()) {
+        return jars.containsKey(jName);
+      }
+    }
+
+    public int functionsSize() {
+      try (AutoCloseableLock lock = readLock.open()) {
+        return functions.size();
+      }
+    }
+
+    private void addFunction(T jName, T fName, T sName, U fHolder) {
+      Map<T, List<T>> map = jars.get(jName);
+
+      List<T> list = map.get(fName);
+      if (list == null) {
+        list = Lists.newArrayList();
+        map.put(fName, list);
+      }
+
+      if (!list.contains(sName)) {
--- End diff --

You are right. Actually we don't expect any duplicates in jar, since we are 
adding jars only after validation.
I'll remove unnecessary checks.
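The `AutoCloseableLock` used throughout the quoted diff is a small wrapper that lets a `java.util.concurrent` lock be managed by try-with-resources; a minimal re-implementation of the idiom (not necessarily identical to Drill's own class) looks like this:

```java
import java.util.concurrent.locks.Lock;

// open() acquires the wrapped lock and returns this object as an
// AutoCloseable, so close() -- and therefore unlock() -- runs automatically
// at the end of a try-with-resources block, even if the body throws.
final class AutoCloseableLock implements AutoCloseable {
  private final Lock lock;

  AutoCloseableLock(Lock lock) {
    this.lock = lock;
  }

  AutoCloseableLock open() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}
```

With a `ReentrantReadWriteLock`, wrapping the read and write locks separately gives the `try (AutoCloseableLock lock = readLock.open()) { ... }` pattern seen in every accessor of `GenericRegistryHolder`.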




[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78177586
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java
 ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
    [quoted hunk elided -- same diff as in the first comment above]
+        U u = sigsMap.get(sName);
--- End diff --

You are right. Actually we don't expect any duplicates in jar, since we are 
adding jars only after validation.
I'll remove unnecessary checks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, 

[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78177330
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java
 ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
    [quoted hunk elided -- same diff as in the first comment above]
+        Map<T, U> sigsMap = functions.get(fName);
--- End diff --

Yes, the signature includes the function name: MY_FUNC(VARCHAR-REQUIRED, 
INT-REQUIRED).
But I'll add examples to the description so it is clearer.
We don't support namespaces in Drill, so we are safe here.
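A signature string of the shape mentioned here can be assembled along these lines (illustrative only; `SignatureSketch` is not Drill's actual builder):

```java
import java.util.List;

// Illustrative helper producing signatures of the form
// MY_FUNC(VARCHAR-REQUIRED, INT-REQUIRED): function name plus a
// comma-separated list of TYPE-MODE parameter descriptors.
final class SignatureSketch {
  static String signature(String name, List<String> paramTypes) {
    return name + "(" + String.join(", ", paramTypes) + ")";
  }
}
```

Because the function name is part of the signature, two functions with the same parameter list but different names never collide in the signature map.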




[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78174691
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java
 ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
    [quoted hunk elided -- same diff as in the first comment above]
+    public boolean containsJar(T jName) {
--- End diff --

It's used during local validation only. Since we have a remote registry in 
ZooKeeper, if such a race condition happens we'll fail during remote validation 
anyway.





[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78172686
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java
 ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
    [quoted hunk elided -- same diff as in the first comment above]
+    public List<T> getAllFunctionNames(T jName) {
--- End diff --

Agree. I suggest going with getFunctionNamesByJar




[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78172574
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java
 ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
    [quoted hunk elided -- same diff as in the first comment above]
+          addFunction(jName, pair.getKey(), sName, pair.getValue());
--- End diff --

Agree.




[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78169406
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
   }
 }
   }
+
+  /**
+   * Function registry holder. Stores function implementations by jar name, function name.
+   */
+  private class GenericRegistryHolder {
+private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
+private final AutoCloseableLock readLock = new AutoCloseableLock(readWriteLock.readLock());
+private final AutoCloseableLock writeLock = new AutoCloseableLock(readWriteLock.writeLock());
+
+// jar name, Map
+private final Map> jars;
+
+// function name, Map
+private final Map> functions;
+
+public GenericRegistryHolder() {
+  this.functions = Maps.newHashMap();
+  this.jars = Maps.newHashMap();
+}
+
+public void addJar(T jName, Map> sNameMap) {
+  try (AutoCloseableLock lock = writeLock.open()) {
+Map map = jars.get(jName);
+if (map != null) {
+  removeAllByJar(jName);
+}
--- End diff --

Agree.


---


[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78169085
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
   }
 }
   }
+
+  /**
+   * Function registry holder. Stores function implementations by jar name, function name.
+   */
+  private class GenericRegistryHolder {
+private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
+private final AutoCloseableLock readLock = new AutoCloseableLock(readWriteLock.readLock());
+private final AutoCloseableLock writeLock = new AutoCloseableLock(readWriteLock.writeLock());
+
+// jar name, Map
+private final Map> jars;
+
+// function name, Map
--- End diff --

Agree. I'll add a description with an example of the structure.


---


[GitHub] drill pull request #574: DRILL-4726: Dynamic UDFs support

2016-09-09 Thread arina-ielchiieva
Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/574#discussion_r78168984
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFunctionRegistry.java ---
@@ -218,4 +302,141 @@ private void registerOperatorsWithoutInference(DrillOperatorTable operatorTable)
   }
 }
   }
+
+  /**
+   * Function registry holder. Stores function implementations by jar name, function name.
+   */
+  private class GenericRegistryHolder {
+private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
+private final AutoCloseableLock readLock = new AutoCloseableLock(readWriteLock.readLock());
+private final AutoCloseableLock writeLock = new AutoCloseableLock(readWriteLock.writeLock());
+
--- End diff --

I use a ReentrantReadWriteLock to perform the locking and wrap its read / write locks in AutoCloseableLock. It should work as documented [1]: multiple readers are allowed at the same time, at most one writer runs at a time, and reads and writes never overlap.

[1] 
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
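
The pattern described above can be sketched as follows. This is a hedged, simplified stand-in for Drill's AutoCloseableLock (class name and methods here are hypothetical); it shows only the idea of wrapping the two locks of a ReentrantReadWriteLock so they release automatically in try-with-resources.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch: wrap a Lock so it can be acquired via try-with-resources
// and released automatically when the block exits.
public class AutoCloseableLockSketch implements AutoCloseable {
  private final Lock lock;

  public AutoCloseableLockSketch(Lock lock) {
    this.lock = lock;
  }

  // Acquire the underlying lock and return this for try-with-resources.
  public AutoCloseableLockSketch open() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }

  public static void main(String[] args) {
    ReadWriteLock rw = new ReentrantReadWriteLock();
    AutoCloseableLockSketch readLock = new AutoCloseableLockSketch(rw.readLock());
    AutoCloseableLockSketch writeLock = new AutoCloseableLockSketch(rw.writeLock());

    try (AutoCloseableLockSketch held = writeLock.open()) {
      // exclusive section: at most one writer, no concurrent readers
    }
    try (AutoCloseableLockSketch held = readLock.open()) {
      // shared section: many readers may hold this at once
    }
  }
}
```

Because open() returns the wrapper itself, callers get the `try (AutoCloseableLock lock = writeLock.open()) { ... }` shape seen in the quoted diff, with unlock guaranteed even on exceptions.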


---


[GitHub] drill pull request #579: DRILL-4874: "No UserGroupInformation while generati...

2016-09-09 Thread vdiravka
Github user vdiravka closed the pull request at:

https://github.com/apache/drill/pull/579


---


[GitHub] drill pull request #584: DRILL-4884: Fix bug that drill sometimes produced I...

2016-09-09 Thread zbdzzg
GitHub user zbdzzg opened a pull request:

https://github.com/apache/drill/pull/584

DRILL-4884: Fix bug that drill sometimes produced IOB exception while querying data of 65536 limitation

Drill produces an IndexOutOfBoundsException (IOB) when a non-batched scanner is used and the SQL LIMIT/OFFSET reaches record 65536.
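
The boundary can be illustrated with a hedged sketch of the arithmetic (an illustration of the reported numbers, not the actual Drill code): the SelectionVector2 in the trace stores each selected-record index as a 2-byte char, so a vector covering 65536 records owns 65536 * 2 = 131072 bytes, and writing the index for record number 65536 targets byte offset 131072, one past the valid range.

```java
// Hedged sketch: why record number 65536 overflows a 2-byte-per-entry
// selection vector, matching the "index: 131072, length: 2
// (expected: range(0, 131072))" message in the report.
public class Sv2OffsetSketch {
  static final int BYTES_PER_INDEX = 2;          // char-sized entries

  static int byteOffset(int recordIndex) {
    return recordIndex * BYTES_PER_INDEX;
  }

  public static void main(String[] args) {
    int capacityBytes = 65536 * BYTES_PER_INDEX; // 131072 valid bytes
    int offendingOffset = byteOffset(65536);     // 131072: first invalid byte
    System.out.println(offendingOffset >= capacityBytes); // out of bounds
  }
}
```

So `limit 1 offset 65535` is exactly the query that forces the writer one entry past the 65536-record capacity.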

SQL:

```
select id from isearch.tmall_auction_cluster limit 1 offset 65535
```

Result:

```
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:324) [classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184) [classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290) [classes/:na]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [classes/:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.lang.IndexOutOfBoundsException: index: 131072, length: 2 (expected: range(0, 131072))
at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:175) ~[classes/:4.0.27.Final]
at io.netty.buffer.DrillBuf.chk(DrillBuf.java:197) ~[classes/:4.0.27.Final]
at io.netty.buffer.DrillBuf.setChar(DrillBuf.java:517) ~[classes/:4.0.27.Final]
at org.apache.drill.exec.record.selection.SelectionVector2.setIndex(SelectionVector2.java:79) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.limitWithNoSV(LimitRecordBatch.java:167) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.doWork(LimitRecordBatch.java:145) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[classes/:na]
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) ~[classes/:na]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250) ~[classes/:na]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_101]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_101]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250) [classes/:na]
... 4 common 

[jira] [Created] (DRILL-4884) Drill produced IOB exception while querying data of 65536 limitation using non batched reader

2016-09-09 Thread Hongze Zhang (JIRA)
Hongze Zhang created DRILL-4884:
---

 Summary: Drill produced IOB exception while querying data of 65536 limitation using non batched reader
 Key: DRILL-4884
 URL: https://issues.apache.org/jira/browse/DRILL-4884
 Project: Apache Drill
  Issue Type: Bug
  Components: Functions - Drill
Affects Versions: 1.8.0
 Environment: CentOS 6.5 / JAVA 8
Reporter: Hongze Zhang


Drill produces an IndexOutOfBoundsException (IOB) when a non-batched scanner is used and the SQL LIMIT/OFFSET reaches record 65536.

SQL:
{noformat}
select id from isearch.tmall_auction_cluster limit 1 offset 65535
{noformat}

Result:
{noformat}
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:324) [classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184) [classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290) [classes/:na]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [classes/:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.lang.IndexOutOfBoundsException: index: 131072, length: 2 (expected: range(0, 131072))
at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:175) ~[classes/:4.0.27.Final]
at io.netty.buffer.DrillBuf.chk(DrillBuf.java:197) ~[classes/:4.0.27.Final]
at io.netty.buffer.DrillBuf.setChar(DrillBuf.java:517) ~[classes/:4.0.27.Final]
at org.apache.drill.exec.record.selection.SelectionVector2.setIndex(SelectionVector2.java:79) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.limitWithNoSV(LimitRecordBatch.java:167) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.doWork(LimitRecordBatch.java:145) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[classes/:na]
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[classes/:na]
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[classes/:na]
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132) ~[classes/:na]
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[classes/:na]
at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:215) ~[classes/:na]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[classes/:na]
at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) ~[classes/:na]
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256) ~[classes/:na]
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250) ~[classes/:na]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_101]
at javax.security.auth.Subject.doAs(Subject.java:422) 

[jira] [Resolved] (DRILL-4874) "No UserGroupInformation while generating ORC splits" - hive known issue in 1.2.0-mapr-1607 release.

2016-09-09 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva resolved DRILL-4874.
-
Resolution: Fixed

> "No UserGroupInformation while generating ORC splits" - hive known issue in 
> 1.2.0-mapr-1607 release.
> 
>
> Key: DRILL-4874
> URL: https://issues.apache.org/jira/browse/DRILL-4874
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.7.0
>Reporter: Vitalii Diravka
>Assignee: Vitalii Diravka
> Fix For: 1.9.0
>
>
> Need to upgrade Drill to hive.version 1.2.0-mapr-1608, where [hive issue HIVE-13120|https://issues.apache.org/jira/browse/HIVE-13120] is fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)