[jira] [Commented] (DRILL-5070) Code gen: create methods in fixed order to allow test verification

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15737399#comment-15737399
 ] 

ASF GitHub Bot commented on DRILL-5070:
---

Github user paul-rogers closed the pull request at:

https://github.com/apache/drill/pull/684


> Code gen: create methods in fixed order to allow test verification
> --
>
> Key: DRILL-5070
> URL: https://issues.apache.org/jira/browse/DRILL-5070
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A handy technique in testing is to compare generated code against a "golden" 
> copy that defines the expected results. However, at present, Drill generates 
> code using the method order returned by {{Class.getDeclaredMethods}}, but 
> this method makes no guarantee about the order of the methods. The order 
> varies from one run to the next. There is some evidence [this 
> link|http://stackoverflow.com/questions/28585843/java-reflection-getdeclaredmethods-in-declared-order-strange-behaviour]
>  that order can vary even within a single run, though a quick test was unable 
> to reproduce this case.
> If method order does indeed vary within a single run, then the order can 
> impact the Drill code cache since it compares the sources from two different 
> generation events to detect duplicate code.
> This issue appeared when attempting to modify tests to capture generated code 
> for comparison to future results. Even a simple generated case from 
> {{ExpressionTest.testBasicExpression()}} that generates {{if(true) then 1 
> else 0 end}} (all constants) produced methods in different orders on each 
> test run.
> The fix is simple: in the {{SignatureHolder}} constructor, sort the methods by 
> name after retrieving them from the class. The sort ensures that method order 
> is deterministic. Fortunately, the number of methods is small, so the sort 
> step adds little cost.
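
For illustration only (not the committed patch), a minimal sketch of the kind of 
name-based sort the description calls for; the class and method names here are 
made up:

{code}
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

public class MethodOrderSketch {
  // Sort the reflected methods by name so that code generation walks them in a
  // deterministic order, regardless of what getDeclaredMethods() returns.
  public static Method[] declaredMethodsSortedByName(Class<?> clazz) {
    final Method[] methods = clazz.getDeclaredMethods();
    Arrays.sort(methods, new Comparator<Method>() {
      @Override
      public int compare(Method left, Method right) {
        return left.getName().compareTo(right.getName());
      }
    });
    return methods;
  }
}
{code}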





[jira] [Commented] (DRILL-5070) Code gen: create methods in fixed order to allow test verification

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15737398#comment-15737398
 ] 

ASF GitHub Bot commented on DRILL-5070:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/684
  
Closing pull request since we don't want to have golden copies of generated 
code.


> Code gen: create methods in fixed order to allow test verification
> --
>
> Key: DRILL-5070
> URL: https://issues.apache.org/jira/browse/DRILL-5070
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> A handy technique in testing is to compare generated code against a "golden" 
> copy that defines the expected results. However, at present, Drill generates 
> code using the method order returned by {{Class.getDeclaredMethods}}, but 
> this method makes no guarantee about the order of the methods. The order 
> varies from one run to the next. There is some evidence [this 
> link|http://stackoverflow.com/questions/28585843/java-reflection-getdeclaredmethods-in-declared-order-strange-behaviour]
>  that order can vary even within a single run, though a quick test was unable 
> to reproduce this case.
> If method order does indeed vary within a single run, then the order can 
> impact the Drill code cache since it compares the sources from two different 
> generation events to detect duplicate code.
> This issue appeared when attempting to modify tests to capture generated code 
> for comparison to future results. Even a simple generated case from 
> {{ExpressionTest.testBasicExpression()}} that generates {{if(true) then 1 
> else 0 end}} (all constants) produced methods in different orders on each 
> test run.
> The fix is simple: in the {{SignatureHolder}} constructor, sort the methods by 
> name after retrieving them from the class. The sort ensures that method order 
> is deterministic. Fortunately, the number of methods is small, so the sort 
> step adds little cost.





[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15737395#comment-15737395
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/665
  
Used a generated serial id for UserException (even though it is never 
serialized). Removed the code cleanup. Changed the code to use a string builder.


> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Commented] (DRILL-5043) Function that returns a unique id per session/connection similar to MySQL's CONNECTION_ID()

2016-12-09 Thread Nagarajan Chinnasamy (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736972#comment-15736972
 ] 

Nagarajan Chinnasamy commented on DRILL-5043:
-

Yes... Got this feedback from reviews on GitHub. Looking at 
https://issues.apache.org/jira/browse/DRILL-4956. Thanks.

> Function that returns a unique id per session/connection similar to MySQL's 
> CONNECTION_ID()
> ---
>
> Key: DRILL-5043
> URL: https://issues.apache.org/jira/browse/DRILL-5043
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Nagarajan Chinnasamy
>Priority: Minor
>  Labels: CONNECTION_ID, SESSION, UDF
> Attachments: 01_session_id_sqlline.png, 
> 02_session_id_webconsole_query.png, 03_session_id_webconsole_result.png
>
>
> Design and implement a function that returns a unique id per 
> session/connection similar to MySQL's CONNECTION_ID().





[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736881#comment-15736881
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91823124
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/client/DrillClient.java ---
@@ -357,10 +357,54 @@ protected void afterExecute(final Runnable r, final Throwable t) {
         super.afterExecute(r, t);
       }
     };
-    client = new UserClient(clientName, config, supportComplexTypes, allocator, eventLoopGroup, executor);
-    logger.debug("Connecting to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
-    connect(endpoint);
-    connected = true;
+
+    // "tries" is max number of unique drillbit to try connecting until successfully connected to one of them
+    final String connectTriesConf = (props != null) ? props.getProperty("tries", "5") : "5";
+
+    int connectTriesVal;
+    try {
+      connectTriesVal = Math.min(endpoints.size(), Integer.parseInt(connectTriesConf));
+    } catch (NumberFormatException e) {
+      throw new InvalidConnectionInfoException("Invalid tries value: " + connectTriesConf + " specified in " +
+          "connection string");
+    }
+
+    // If the value provided in the connection string is <=0 then override with 1 since we want to try connecting
+    // at least once
+    connectTriesVal = Math.max(1, connectTriesVal);
+
+    int triedEndpointIndex = 0;
+    DrillbitEndpoint endpoint;
+
+    while (triedEndpointIndex < connectTriesVal) {
+      client = new UserClient(clientName, config, supportComplexTypes, allocator, eventLoopGroup, executor);
+      endpoint = endpoints.get(triedEndpointIndex);
+      logger.debug("Connecting to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
+
+      try {
+        connect(endpoint);
+        connected = true;
+        logger.info("Successfully connected to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
+        break;
+      } catch (InvalidConnectionInfoException ex) {
+        logger.error("Connection to {}:{} failed with error {}. Not retrying anymore", endpoint.getAddress(),
+            endpoint.getUserPort(), ex.getMessage());
+        throw ex;
+      } catch (RpcException ex) {
+        ++triedEndpointIndex;
+        logger.error("Attempt {}: Failed to connect to server {}:{}", triedEndpointIndex, endpoint.getAddress(),
+            endpoint.getUserPort());
+
+        // Close the connection
+        if (client.isActive()) {
--- End diff --

The loop creates a new client on each iteration anyway, so this should be 
closed regardless of whether it is active.
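
A minimal sketch, assuming the retry loop shown in the diff above, of what the 
reviewer's suggestion could look like in the RpcException branch (illustrative 
only, not the merged change):

{code}
      } catch (RpcException ex) {
        ++triedEndpointIndex;
        logger.error("Attempt {}: Failed to connect to server {}:{}", triedEndpointIndex,
            endpoint.getAddress(), endpoint.getUserPort());

        // Close the previous client unconditionally; the next loop iteration
        // creates a fresh UserClient anyway.
        client.close();
      }
{code}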


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to be a dead 
> Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit and that 
> Drillbit then goes down. The client will fail to connect to this Drillbit and 
> error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced in 
> both cases, improving fault tolerance. During further investigation it was 
> also found that on an authentication failure we throw the error as a generic 
> RpcException. We need to improve that as well to capture this case explicitly, 
> since in case of an auth failure we don't want to try multiple Drillbits.
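
For illustration, a hedged sketch of a direct connection that passes the tries 
count in the connection string; the class name, hosts, and ports are 
placeholders, and the URL format follows the test code quoted elsewhere in this 
thread:

{code}
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import org.apache.drill.jdbc.Driver;

public class ConnectTriesExample {
  public static void main(String[] args) throws SQLException {
    // Two candidate Drillbits; "tries=2" asks the client to attempt at most two
    // unique Drillbits before failing. Hosts and ports are placeholders.
    final String url = "jdbc:drill:drillbit=host1:31010,host2:31010;tries=2";
    try (Connection connection = new Driver().connect(url, new Properties())) {
      System.out.println("Connected: " + !connection.isClosed());
    }
  }
}
{code}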





[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736860#comment-15736860
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91821466
  
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java ---
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.client;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.ExecutionException;
+
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.exec.ZookeeperHelper;
+import org.apache.drill.exec.coord.ClusterCoordinator;
+import org.apache.drill.exec.exception.DrillbitStartupException;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.rpc.InvalidConnectionInfoException;
+import org.apache.drill.exec.rpc.RpcException;
+import org.apache.drill.exec.server.Drillbit;
+
+import org.apache.drill.exec.server.RemoteServiceSet;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static junit.framework.TestCase.assertTrue;
+import static junit.framework.TestCase.fail;
+
+public class ConnectTriesPropertyTestClusterBits {
+
+  public static StringBuilder bitInfo;
+  public static final String fakeBitsInfo = "127.0.0.1:5000,127.0.0.1:5001";
+  public static List<Drillbit> drillbits;
+  public static final int drillBitCount = 1;
+  public static ZookeeperHelper zkHelper;
+  public static RemoteServiceSet remoteServiceSet;
+  public static DrillConfig drillConfig;
+
+  @BeforeClass
+  public static void testSetUp() throws Exception {
+    remoteServiceSet = RemoteServiceSet.getLocalServiceSet();
+    zkHelper = new ZookeeperHelper();
+    zkHelper.startZookeeper(1);
+
+    // Creating Drillbits
+    drillConfig = zkHelper.getConfig();
+    try {
+      int drillBitStarted = 0;
+      drillbits = new ArrayList<>();
+      while (drillBitStarted < drillBitCount) {
+        drillbits.add(Drillbit.start(drillConfig, remoteServiceSet));
+        ++drillBitStarted;
+      }
+    } catch (DrillbitStartupException e) {
+      throw new RuntimeException("Failed to start drillbits.", e);
+    }
+    bitInfo = new StringBuilder();
+
+    for (int i = 0; i < drillBitCount; ++i) {
+      final DrillbitEndpoint currentEndPoint = drillbits.get(i).getContext().getEndpoint();
+      final String currentBitIp = currentEndPoint.getAddress();
+      final int currentBitPort = currentEndPoint.getUserPort();
+      bitInfo.append(",");
+      bitInfo.append(currentBitIp);
+      bitInfo.append(":");
+      bitInfo.append(currentBitPort);
+    }
+  }
+
+  @AfterClass
+  public static void testCleanUp(){
+    for (int i = 0; i < drillBitCount; ++i) {
+      drillbits.get(i).close();
--- End diff --

Fixed


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to be a dead 
> Drillbit.

[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736859#comment-15736859
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91822093
  
--- Diff: exec/jdbc/src/test/java/org/apache/drill/jdbc/test/JdbcConnectTriesTestEmbeddedBits.java ---
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.jdbc.test;
+
+import org.apache.drill.exec.rpc.InvalidConnectionInfoException;
+import org.apache.drill.exec.rpc.RpcException;
+import org.apache.drill.jdbc.Driver;
+import org.apache.drill.jdbc.JdbcTestBase;
+
+import org.junit.Test;
+
+import java.sql.SQLException;
+import java.sql.Connection;
+
+import java.util.concurrent.ExecutionException;
+
+import static junit.framework.Assert.assertNotNull;
+import static junit.framework.TestCase.fail;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+public class JdbcConnectTriesTestEmbeddedBits extends JdbcTestBase {
+
+  @Test
+  public void testDirectConnectionConnectTriesEqualsDrillbitCount() throws SQLException {
+    Connection connection = null;
+    try {
+      connection = new Driver().connect("jdbc:drill:drillbit=127.0.0.1:5000,127.0.0.1:5001;" + "tries=2",
--- End diff --

Fixed


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to be a dead 
> Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit and that 
> Drillbit then goes down. The client will fail to connect to this Drillbit and 
> error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced in 
> both cases, improving fault tolerance. During further investigation it was 
> also found that on an authentication failure we throw the error as a generic 
> RpcException. We need to improve that as well to capture this case explicitly, 
> since in case of an auth failure we don't want to try multiple Drillbits.





[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736861#comment-15736861
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91820952
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/client/DrillClient.java ---
@@ -357,10 +357,54 @@ protected void afterExecute(final Runnable r, final Throwable t) {
         super.afterExecute(r, t);
       }
     };
-    client = new UserClient(clientName, config, supportComplexTypes, allocator, eventLoopGroup, executor);
-    logger.debug("Connecting to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
-    connect(endpoint);
-    connected = true;
+
+    // "tries" is max number of unique drillbit to try connecting until successfully connected to one of them
+    final String connectTriesConf = (props != null) ? props.getProperty("tries", "5") : "5";
+
+    int connectTriesVal;
+    try {
+      connectTriesVal = Math.min(endpoints.size(), Integer.parseInt(connectTriesConf));
+    } catch (NumberFormatException e) {
+      throw new InvalidConnectionInfoException("Invalid tries value: " + connectTriesConf + " specified in " +
+          "connection string");
+    }
+
+    // If the value provided in the connection string is <=0 then override with 1 since we want to try connecting
+    // at least once
+    connectTriesVal = Math.max(1, connectTriesVal);
+
+    int triedEndpointIndex = 0;
+    DrillbitEndpoint endpoint;
+
+    while (triedEndpointIndex < connectTriesVal) {
+      client = new UserClient(clientName, config, supportComplexTypes, allocator, eventLoopGroup, executor);
+      endpoint = endpoints.get(triedEndpointIndex);
+      logger.debug("Connecting to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
+
+      try {
+        connect(endpoint);
+        connected = true;
+        logger.info("Successfully connected to server {}:{}", endpoint.getAddress(), endpoint.getUserPort());
+        break;
+      } catch (InvalidConnectionInfoException ex) {
+        logger.error("Connection to {}:{} failed with error {}. Not retrying anymore", endpoint.getAddress(),
+            endpoint.getUserPort(), ex.getMessage());
+        throw ex;
+      } catch (RpcException ex) {
+        ++triedEndpointIndex;
+        logger.error("Attempt {}: Failed to connect to server {}:{}", triedEndpointIndex, endpoint.getAddress(),
+            endpoint.getUserPort());
+
+        // Close the connection
+        if (client.isActive()) {
--- End diff --

Here client is the UserClient, and "isActive" checks all of the conditions below:

    return connection != null
        && connection.getChannel() != null
        && connection.getChannel().isActive();

and close --> closes the corresponding channel.

Since we are retrying, the other resources will be reused for the next connection 
attempt. After all retries are exhausted, those will be closed by the 
DrillClient.close method.


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to be a dead 
> Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit and that 
> Drillbit then goes down. The client will fail to connect to this Drillbit and 
> error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced in 
> both cases, improving fault tolerance. During further investigation it was 
> also found that on an authentication failure we throw the error as a generic 
> RpcException. We need to improve that as well to capture this case explicitly, 
> since in case of an auth failure we don't want to try multiple Drillbits.

[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736797#comment-15736797
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91820335
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java ---
@@ -224,6 +224,7 @@ public void run() {
           ImpersonationUtil.getProcessUserUGI();
 
       queryUserUgi.doAs(new PrivilegedExceptionAction<Void>() {
+        @Override
--- End diff --

Please undo these overrides. Although useful, these are unrelated to the 
issue. Maybe another ticket?


> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736796#comment-15736796
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91820261
  
--- Diff: common/src/main/java/org/apache/drill/common/exceptions/UserException.java ---
@@ -549,7 +550,12 @@ public UserException build(final Logger logger) {
       if (isSystemError) {
         logger.error(newException.getMessage(), newException);
       } else {
-        logger.info("User Error Occurred", newException);
+        String msg = "User Error Occurred";
+        if (message != null) {
+          msg += ": " + message; }
--- End diff --

`}` on new line, here and below.


> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Commented] (DRILL-5056) UserException does not write full message to log

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736798#comment-15736798
 ] 

ASF GitHub Bot commented on DRILL-5056:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/665#discussion_r91820210
  
--- Diff: common/src/main/java/org/apache/drill/common/exceptions/UserException.java ---
@@ -549,7 +550,12 @@ public UserException build(final Logger logger) {
       if (isSystemError) {
         logger.error(newException.getMessage(), newException);
       } else {
-        logger.info("User Error Occurred", newException);
+        String msg = "User Error Occurred";
--- End diff --

Use StringBuilder?
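
A minimal sketch of what the StringBuilder suggestion might look like 
(illustrative only, not the committed patch; the helper name is made up):

{code}
  // Illustrative only: assemble the log line with a StringBuilder, as suggested.
  // "message" is the user-facing message field; callers would then pass the
  // result to logger.info(..., newException).
  private static String buildLogHeader(final String message) {
    final StringBuilder sb = new StringBuilder("User Error Occurred");
    if (message != null) {
      sb.append(": ").append(message);
    }
    return sb.toString();
  }
{code}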


> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Commented] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736784#comment-15736784
 ] 

ASF GitHub Bot commented on DRILL-5112:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/681
  
+1


> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical OPerators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
>     CONFIG = DrillConfig.create();
>   }
> {code}
> The tests rely on config settings specified in the {{pom.xml}} file (see note 
> below.) When run in Eclipse, no such config exists, so the tests use only the 
> default config. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.
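
A hedged sketch of the kind of setup the fix describes — building the test 
config programmatically rather than relying on {{pom.xml}} settings. The class 
name, the property key, and the {{DrillConfig.create(Properties)}} overload are 
assumptions for this sketch, not taken from the patch:

{code}
import java.util.Properties;

import org.apache.drill.common.config.DrillConfig;
import org.junit.BeforeClass;

public class ExamplePopTestSetup {
  protected static DrillConfig CONFIG;

  @BeforeClass
  public static void setup() {
    // Override the defaults so embedded Drillbits started by the test do not
    // try to bind an HTTP port (assumed property key).
    final Properties props = new Properties();
    props.setProperty("drill.exec.http.enabled", "false");
    CONFIG = DrillConfig.create(props);
  }
}
{code}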





[jira] [Commented] (DRILL-5044) After the dynamic registration of multiple jars simultaneously not all UDFs were registered

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736769#comment-15736769
 ] 

ASF GitHub Bot commented on DRILL-5044:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/669
  
+1


> After the dynamic registration of multiple jars simultaneously not all UDFs 
> were registered
> ---
>
> Key: DRILL-5044
> URL: https://issues.apache.org/jira/browse/DRILL-5044
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.9.0
>Reporter: Roman
>Assignee: Arina Ielchiieva
>  Labels: ready-to-commit
>
> I tried to register 21 jars simultaneously (property 'udf.retry-attempts' = 
> 30) and not all jars were registered. As I can see in the output, all functions 
> were registered and the /staging directory was empty, but not all of the jars 
> were moved into the /registry directory. 
> For example, after simultaneous registration I saw the message "The following 
> UDFs in jar test-1.1.jar have been registered: [test1(VARCHAR-REQUIRED)", but 
> this jar was not in the /registry directory. When I tried to run function test1, 
> I got this error: "Error: SYSTEM ERROR: SqlValidatorException: No match found 
> for function signature test1()". And when I tried to re-register 
> this jar, I got "Jar with test-1.1.jar name has been already registered".





[jira] [Commented] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736772#comment-15736772
 ] 

ASF GitHub Bot commented on DRILL-5108:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/680
  
+1


> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> The git-commit-id-plugin grabs information from Git to display during a 
> build. It prints many e-mail addresses and other generic project information. 
> As part of the effort to trim down unit test output, we propose to turn off 
> the verbose output from this plugin.
> Specific change:
> {code}
>   <plugin>
>     <groupId>pl.project13.maven</groupId>
>     <artifactId>git-commit-id-plugin</artifactId>
>     ...
>     <configuration>
>       <verbose>false</verbose>
> {code}
> That is, change the verbose setting from true to false.
> In the unlikely event that some build process depends on the verbose output, 
> we can make the setting a configurable parameter, defaulting to false.





[jira] [Created] (DRILL-5121) A memory leak is observed when exact case is not specified for a column in a filter condition

2016-12-09 Thread Karthikeyan Manivannan (JIRA)
Karthikeyan Manivannan created DRILL-5121:
-

 Summary: A memory leak is observed when exact case is not 
specified for a column in a filter condition
 Key: DRILL-5121
 URL: https://issues.apache.org/jira/browse/DRILL-5121
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Relational Operators
Affects Versions: 1.8.0, 1.6.0
Reporter: Karthikeyan Manivannan
Assignee: Karthikeyan Manivannan
 Fix For: Future


When the query SELECT XYZ FROM dfs.`/tmp/foo` WHERE xYZ LIKE 'abc' is executed 
on a setup where /tmp/foo has 2 Parquet files, 1.parquet and 2.parquet, where 
1.parquet has the column XYZ but 2.parquet does not, there is a memory 
leak. 

This seems to happen because xYZ seems to be treated as a new column. 





[jira] [Commented] (DRILL-5091) JDBC unit test fail on Java 8

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736721#comment-15736721
 ] 

ASF GitHub Bot commented on DRILL-5091:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/676
  
Opened DRILL-5120 for the JDBC Driver update for JDBC 4.2. Added TODOs as 
requested. Squashed commits to simplify commit to master.


> JDBC unit test fail on Java 8
> -
>
> Key: DRILL-5091
> URL: https://issues.apache.org/jira/browse/DRILL-5091
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Java 8
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
>
> Run the {{TestJDBCQuery}} unit tests. They will fail with errors relating to 
> the default name space.
> The problem is due to a failure (that is ignored, DRILL-5090) to set up the 
> test DFS name space.
> The "dfs_test" storage plugin is not found in the plugin registry, resulting 
> in a null object and NPE.





[jira] [Created] (DRILL-5120) Upgrade JDBC Driver for new Java 8 methods

2016-12-09 Thread Paul Rogers (JIRA)
Paul Rogers created DRILL-5120:
--

 Summary: Upgrade JDBC Driver for new Java 8 methods
 Key: DRILL-5120
 URL: https://issues.apache.org/jira/browse/DRILL-5120
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Affects Versions: 1.8.0
Reporter: Paul Rogers
Priority: Minor


Java 8 has been available for some time. Included in Java 8 is a new version of 
the JDBC interface: JDBC 4.2. Consult the JDBC spec for details.

The JDBC unit tests were modified to pass with the default JDBC 4.2 
implementations. The "known not implemented" code (marked with TODO) should be 
replaced to test the real implementations.





[jira] [Updated] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread Abhishek Girish (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Girish updated DRILL-5119:
---
Labels: ready-to-commit  (was: )

> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
>  Labels: ready-to-commit
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736600#comment-15736600
 ] 

ASF GitHub Bot commented on DRILL-5119:
---

Github user adityakishore commented on the issue:

https://github.com/apache/drill/pull/688
  
LGTM.


> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736598#comment-15736598
 ] 

ASF GitHub Bot commented on DRILL-5119:
---

Github user spanchamiamapr commented on the issue:

https://github.com/apache/drill/pull/688
  
LGTM


> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736586#comment-15736586
 ] 

ASF GitHub Bot commented on DRILL-5108:
---

Github user paul-rogers commented on the issue:

https://github.com/apache/drill/pull/680
  
The non-verbose output is, indeed, blank. The value of the plugin seems to 
be that it creates Maven variables that can be used elsewhere in the POM for 
various purposes. Dumping to console is an "extra."


> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> The git-commit-id-plugin grabs information from Git to display during a 
> build. It prints many e-mail addresses and other generic project information. 
> As part of the effort to trim down unit test output, we propose to turn off 
> the verbose output from this plugin.
> Specific change:
> {code}
>   <plugin>
>     <groupId>pl.project13.maven</groupId>
>     <artifactId>git-commit-id-plugin</artifactId>
>     ...
>     <configuration>
>       <verbose>false</verbose>
> {code}
> That is, change the verbose setting from true to false.
> In the unlikely event that some build process depends on the verbose output, 
> we can make the setting a configurable parameter, defaulting to false.





[jira] [Updated] (DRILL-5056) UserException does not write full message to log

2016-12-09 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5056:
---
Labels: ready-to-commit  (was: )

> UserException does not write full message to log
> 
>
> Key: DRILL-5056
> URL: https://issues.apache.org/jira/browse/DRILL-5056
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> A case occurred in which the External Sort failed during spilling. All that 
> was written to the log was:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error Occurred
> {code}
> As it turns out, there are two places in external sort that can throw a user 
> exception. But, because the log contains neither line numbers nor detailed 
> messages, it is not obvious which one was the cause.
> When logging a user error, include the text provided when building the error. 
> For example, consider the following external sort code:
> {code}
>   throw UserException.resourceError(e)
> .message("External Sort encountered an error while spilling to disk")
>   .addContext(e.getMessage() /* more detail */)
> .build(logger);
> {code}
> The expected message is:
> {code}
> 2016-11-18 ... INFO  o.a.d.e.p.i.xsort.ExternalSortBatch - User Error 
> Occurred: External Sort encountered an error while spilling to disk (Disk 
> write failed) 
> {code}
> The part in parens is the cause of the error: the {{e.getMessage( )}} in the 
> above code.





[jira] [Updated] (DRILL-5112) Unit tests derived from PopUnitTestBase fail in IDE due to config errors

2016-12-09 Thread Paul Rogers (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Rogers updated DRILL-5112:
---
Labels: ready-to-commit  (was: )

> Unit tests derived from PopUnitTestBase fail in IDE due to config errors
> 
>
> Key: DRILL-5112
> URL: https://issues.apache.org/jira/browse/DRILL-5112
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
>
> Drill provides a wide variety of unit tests. Many derive from 
> {{PopUnitTestBase}} to test the Physical OPerators.
> The tests use a default configuration:
> {code}
> protected static DrillConfig CONFIG;
>   @BeforeClass
>   public static void setup() {
>     CONFIG = DrillConfig.create();
>   }
> {code}
> The tests rely on config settings specified in the {{pom.xml}} file (see note 
> below.) When run in Eclipse, no such config exists, so the tests use only the 
> default config. The defaults allow a web server to be started.
> Many tests start multiple Drillbits using the above config. When this occurs, 
> each tries to start a web server. The second one fails because the HTTP port 
> is already in use.
> The solution is to initialize the config using the same settings as used in 
> the {{BaseTestQuery}} test case: the unit tests then work fine in Eclipse.
> As an aside, having multiple ways to set up the Drill config (and other 
> items) leads to much wasted time as each engineer must learn the quirks of 
> each test hierarchy.





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736562#comment-15736562
 ] 

ASF GitHub Bot commented on DRILL-5119:
---

Github user Agirish commented on the issue:

https://github.com/apache/drill/pull/688
  
@spanchamiamapr, @adityakishore can one of you please review this change?


> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736538#comment-15736538
 ] 

ASF GitHub Bot commented on DRILL-5119:
---

Github user Agirish commented on the issue:

https://github.com/apache/drill/pull/688
  
+1 (non-binding). Thanks for making the change. 


> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread Patrick Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736526#comment-15736526
 ] 

Patrick Wong commented on DRILL-5119:
-

PR created: https://github.com/apache/drill/pull/688

> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Commented] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736525#comment-15736525
 ] 

ASF GitHub Bot commented on DRILL-5119:
---

GitHub user pwong-mapr opened a pull request:

https://github.com/apache/drill/pull/688

DRILL-5119 - Update MapR version to 5.2.0.40963-mapr

Change for https://issues.apache.org/jira/browse/DRILL-5119

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pwong-mapr/incubator-drill patch-4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/688.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #688


commit c3f03b186e786dc92ad88ee459c93e581fc52c26
Author: Patrick Wong 
Date:   2016-12-09T22:26:31Z

DRILL-5119 - Update MapR version to 5.2.0.40963-mapr




> Update MapR version to 5.2.0.40963-mapr
> ---
>
> Key: DRILL-5119
> URL: https://issues.apache.org/jira/browse/DRILL-5119
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Tools, Build & Test
>Affects Versions: 1.10.0
>Reporter: Abhishek Girish
>Assignee: Patrick Wong
> Fix For: 1.10.0
>
>
> This is for the "mapr" profile. 





[jira] [Updated] (DRILL-4996) Parquet Date auto-correction is not working in auto-partitioned parquet files generated by drill-1.6

2016-12-09 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-4996:

Reviewer: Padma Penumarthy

Assigned code reviewer to [~ppenumarthy]

> Parquet Date auto-correction is not working in auto-partitioned parquet files 
> generated by drill-1.6
> 
>
> Key: DRILL-4996
> URL: https://issues.apache.org/jira/browse/DRILL-4996
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Critical
> Attachments: item.tgz
>
>
> git.commit.id.abbrev=4ee1d4c
> Below are the steps I followed to generate the data :
> {code}
> 1. Generate a parquet file with date column using hive1.2
> 2. Use drill 1.6 to create auto-partitioned parquet files partitioned on the 
> date column
> {code}
> Now the below query returns wrong results :
> {code}
> select i_rec_start_date, i_size from 
> dfs.`/drill/testdata/parquet_date/auto_partition/item_multipart_autorefresh`  
> group by i_rec_start_date, i_size;
> +---+--+
> | i_rec_start_date  |i_size|
> +---+--+
> | null  | large|
> | 366-11-08| extra large  |
> | 366-11-08| medium   |
> | null  | medium   |
> | 366-11-08| petite   |
> | 364-11-07| medium   |
> | null  | petite   |
> | 365-11-07| medium   |
> | 368-11-07| economy  |
> | 365-11-07| large|
> | 365-11-07| small|
> | 366-11-08| small|
> | 365-11-07| extra large  |
> | 364-11-07| N/A  |
> | 366-11-08| economy  |
> | 366-11-08| large|
> | 364-11-07| small|
> | null  | small|
> | 364-11-07| large|
> | 364-11-07| extra large  |
> | 368-11-07| N/A  |
> | 368-11-07| extra large  |
> | 368-11-07| large|
> | 365-11-07| petite   |
> | null  | N/A  |
> | 365-11-07| economy  |
> | 364-11-07| economy  |
> | 364-11-07| petite   |
> | 365-11-07| N/A  |
> | 368-11-07| medium   |
> | null  | extra large  |
> | 368-11-07| small|
> | 368-11-07| petite   |
> | 366-11-08| N/A  |
> +---+--+
> 34 rows selected (0.691 seconds)
> {code}
> However I tried generating the auto-partitioned parquet files using Drill 1.2 
> and then the above query returned the right results.
> I attached the required data sets.





[jira] [Updated] (DRILL-5041) Make SingleRowListener (and other utilities) published public classes

2016-12-09 Thread Chris Westin (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Westin updated DRILL-5041:

Component/s: Client - Java

> Make SingleRowListener (and other utilities) published public classes
> -
>
> Key: DRILL-5041
> URL: https://issues.apache.org/jira/browse/DRILL-5041
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - Java
>Affects Versions: 1.8.0
> Environment: This is actually for the Java Client, but there's no 
> such component.
>Reporter: Chris Westin
>Assignee: Chris Westin
>
> I have an application that uses the DrillClient interface (specifically, an 
> implementation of OJAI), and it would have been convenient to use things like 
> SingleRowListener in my implementation, but they are not available outside 
> the Drill project. There are many such utilities that are used in unit tests 
> that would be useful to external API users.





[jira] [Commented] (DRILL-4996) Parquet Date auto-correction is not working in auto-partitioned parquet files generated by drill-1.6

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736437#comment-15736437
 ] 

ASF GitHub Bot commented on DRILL-4996:
---

GitHub user vdiravka opened a pull request:

https://github.com/apache/drill/pull/687

DRILL-4996: Parquet Date auto-correction is not working in auto-parti…

…tioned parquet files generated by drill-1.6

- Changed detection approach of corrupted date values for the case, when 
parquet files are generated by drill:
  the corruption status is determined by looking at the min/max values in 
the metadata;
- Appropriate refactoring of TestCorruptParquetDateCorrection.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vdiravka/drill DRILL-4996

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/687.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #687


commit 22f4c3dcbe00d47055185d802e88fecaff0d252d
Author: Vitalii Diravka 
Date:   2016-12-09T08:00:48Z

DRILL-4996: Parquet Date auto-correction is not working in auto-partitioned 
parquet files generated by drill-1.6
- Changed detection approach of corrupted date values for the case, when 
parquet files are generated by drill:
  the corruption status is determined by looking at the min/max values in 
the metadata;
- Appropriate refactoring of TestCorruptParquetDateCorrection.
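
As a rough illustration of the detection idea described in this commit message 
— flagging date columns whose min/max statistics are implausibly large and 
shifting them back — with constants and names that are made up for the sketch, 
not Drill's actual values:

{code}
public class CorruptDateSketch {
  // Dates are stored as days since the Unix epoch. If a column's min/max
  // statistics are absurdly large, assume the writer produced corrupted values
  // and shift them back. Both constants below are illustrative only.
  private static final int ILLUSTRATIVE_CORRUPTION_THRESHOLD = 1_000_000;
  private static final int ILLUSTRATIVE_DATE_SHIFT = 4_881_176;

  static boolean looksCorrupted(int minDays, int maxDays) {
    return minDays > ILLUSTRATIVE_CORRUPTION_THRESHOLD
        && maxDays > ILLUSTRATIVE_CORRUPTION_THRESHOLD;
  }

  static int correct(int corruptedDays) {
    return corruptedDays - ILLUSTRATIVE_DATE_SHIFT;
  }
}
{code}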




> Parquet Date auto-correction is not working in auto-partitioned parquet files 
> generated by drill-1.6
> 
>
> Key: DRILL-4996
> URL: https://issues.apache.org/jira/browse/DRILL-4996
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Parquet
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Critical
> Attachments: item.tgz
>
>
> git.commit.id.abbrev=4ee1d4c
> Below are the steps I followed to generate the data :
> {code}
> 1. Generate a parquet file with date column using hive1.2
> 2. Use drill 1.6 to create auto-partitioned parquet files partitioned on the 
> date column
> {code}
> Now the below query returns wrong results :
> {code}
> select i_rec_start_date, i_size from 
> dfs.`/drill/testdata/parquet_date/auto_partition/item_multipart_autorefresh`  
> group by i_rec_start_date, i_size;
> +---+--+
> | i_rec_start_date  |i_size|
> +---+--+
> | null  | large|
> | 366-11-08| extra large  |
> | 366-11-08| medium   |
> | null  | medium   |
> | 366-11-08| petite   |
> | 364-11-07| medium   |
> | null  | petite   |
> | 365-11-07| medium   |
> | 368-11-07| economy  |
> | 365-11-07| large|
> | 365-11-07| small|
> | 366-11-08| small|
> | 365-11-07| extra large  |
> | 364-11-07| N/A  |
> | 366-11-08| economy  |
> | 366-11-08| large|
> | 364-11-07| small|
> | null  | small|
> | 364-11-07| large|
> | 364-11-07| extra large  |
> | 368-11-07| N/A  |
> | 368-11-07| extra large  |
> | 368-11-07| large|
> | 365-11-07| petite   |
> | null  | N/A  |
> | 365-11-07| economy  |
> | 364-11-07| economy  |
> | 364-11-07| petite   |
> | 365-11-07| N/A  |
> | 368-11-07| medium   |
> | null  | extra large  |
> | 368-11-07| small|
> | 368-11-07| petite   |
> | 366-11-08| N/A  |
> +---+--+
> 34 rows selected (0.691 seconds)
> {code}
> However, when I generated the auto-partitioned parquet files using Drill 1.2, 
> the above query returned the right results.
> I attached the required data sets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-5119) Update MapR version to 5.2.0.40963-mapr

2016-12-09 Thread Abhishek Girish (JIRA)
Abhishek Girish created DRILL-5119:
--

 Summary: Update MapR version to 5.2.0.40963-mapr
 Key: DRILL-5119
 URL: https://issues.apache.org/jira/browse/DRILL-5119
 Project: Apache Drill
  Issue Type: Bug
  Components: Tools, Build & Test
Affects Versions: 1.10.0
Reporter: Abhishek Girish
Assignee: Patrick Wong
 Fix For: 1.10.0


This is for the "mapr" profile. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5048) AssertionError when case statement is used with timestamp and null

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736393#comment-15736393
 ] 

ASF GitHub Bot commented on DRILL-5048:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/657
  
+1


> AssertionError when case statement is used with timestamp and null
> --
>
> Key: DRILL-5048
> URL: https://issues.apache.org/jira/browse/DRILL-5048
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
>  Labels: ready-to-commit
> Fix For: Future
>
>
> AssertionError when we use case with timestamp and null:
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> SELECT res, CASE res WHEN true THEN 
> CAST('1990-10-10 22:40:50' AS TIMESTAMP) ELSE null END
> . . . . . . . . . . . . . . > FROM
> . . . . . . . . . . . . . . > (
> . . . . . . . . . . . . . . > SELECT
> . . . . . . . . . . . . . . > (CASE WHEN (false) THEN null ELSE 
> CAST('1990-10-10 22:40:50' AS TIMESTAMP) END) res
> . . . . . . . . . . . . . . > FROM (values(1)) foo
> . . . . . . . . . . . . . . > ) foobar;
> Error: SYSTEM ERROR: AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
> rowtype of set:
> RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL
> [Error Id: b56e0a4d-2f9e-4afd-8c60-5bc2f9d31f8f on centos-01.qa.lab:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of new rel:
> RecordType(TIMESTAMP(0) NOT NULL res, TIMESTAMP(0) EXPR$1) NOT NULL
> rowtype of set:
> RecordType(TIMESTAMP(0) res, TIMESTAMP(0) EXPR$1) NOT NULL
> at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1696) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.plan.volcano.RelSubset.add(RelSubset.java:295) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.plan.volcano.RelSet.add(RelSet.java:147) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.addRelToSet(VolcanoPlanner.java:1818)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1760)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:1017)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1037)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1940)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:138)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> ... 16 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5085) Add / update description for dynamic UDFs directories in drill-env.sh and drill-module.conf

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736387#comment-15736387
 ] 

ASF GitHub Bot commented on DRILL-5085:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/672
  
+1


> Add / update description for dynamic UDFs directories in drill-env.sh and 
> drill-module.conf
> ---
>
> Key: DRILL-5085
> URL: https://issues.apache.org/jira/browse/DRILL-5085
> Project: Apache Drill
>  Issue Type: Improvement
>Affects Versions: 1.9.0
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Minor
>  Labels: ready-to-commit
> Fix For: 1.10.0
>
>
> 1. Add description for $DRILL_TMP_DIR in drill-env.sh
> 2. Update description for dynamic UDFs directories in drill-module.conf
> 3. Add dynamic UDFs settings in drill-override-example.conf
> 4. Add additional logging during udf areas creation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5091) JDBC unit test fail on Java 8

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736380#comment-15736380
 ] 

ASF GitHub Bot commented on DRILL-5091:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/676
  
+1, pending minor comment


> JDBC unit test fail on Java 8
> -
>
> Key: DRILL-5091
> URL: https://issues.apache.org/jira/browse/DRILL-5091
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Java 8
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
>
> Run the {{TestJDBCQuery}} unit tests. They will fail with errors relating to 
> the default namespace.
> The problem is due to a failure (which is ignored; see DRILL-5090) to set up 
> the test DFS namespace.
> The "dfs_test" storage plugin is not found in the plugin registry, resulting 
> in a null object and an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5091) JDBC unit test fail on Java 8

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736378#comment-15736378
 ] 

ASF GitHub Bot commented on DRILL-5091:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/676#discussion_r91797180
  
--- Diff: 
exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2489CallsAfterCloseThrowExceptionsTest.java
 ---
@@ -477,18 +468,32 @@ public void testClosedConnectionMethodsThrowRight() {
 }
 
 @Override
+protected boolean isOkayNonthrowingMethod(Method method) {
+  // Java 8 method
+  if ("getLargeUpdateCount".equals(method.getName())) {
+return true; }
+  return super.isOkayNonthrowingMethod(method);
+}
+
+@Override
 protected boolean isOkaySpecialCaseException(Method method, Throwable 
cause) {
   final boolean result;
   if (super.isOkaySpecialCaseException(method, cause)) {
 result = true;
   }
+  else if (   method.getName().equals("executeLargeBatch")
+   || method.getName().equals("executeLargeUpdate")) {
+// New Java 8 methods not implemented in Avatica.
--- End diff --

Please open a ticket for this so we can make the changes once Avatica 
supports Java 8, and add TODOs instead of comments for easy tracking 
([example](https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/profile/FragmentWrapper.java#L103)).


> JDBC unit test fail on Java 8
> -
>
> Key: DRILL-5091
> URL: https://issues.apache.org/jira/browse/DRILL-5091
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
> Environment: Java 8
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>  Labels: ready-to-commit
>
> Run the {{TestJDBCQuery}} unit tests. They will fail with errors relating to 
> the default namespace.
> The problem is due to a failure (which is ignored; see DRILL-5090) to set up 
> the test DFS namespace.
> The "dfs_test" storage plugin is not found in the plugin registry, resulting 
> in a null object and an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736360#comment-15736360
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91796042
  
--- Diff: 
exec/jdbc/src/test/java/org/apache/drill/jdbc/test/JdbcConnectTriesTestEmbeddedBits.java
 ---
@@ -0,0 +1,162 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.jdbc.test;
+
+import org.apache.drill.exec.rpc.InvalidConnectionInfoException;
+import org.apache.drill.exec.rpc.RpcException;
+import org.apache.drill.jdbc.Driver;
+import org.apache.drill.jdbc.JdbcTestBase;
+
+import org.junit.Test;
+
+import java.sql.SQLException;
+import java.sql.Connection;
+
+import java.util.concurrent.ExecutionException;
+
+import static junit.framework.Assert.assertNotNull;
+import static junit.framework.TestCase.fail;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+public class JdbcConnectTriesTestEmbeddedBits extends JdbcTestBase {
+
+  @Test
+  public void testDirectConnectionConnectTriesEqualsDrillbitCount() throws 
SQLException {
+Connection connection = null;
+try {
+  connection = new 
Driver().connect("jdbc:drill:drillbit=127.0.0.1:5000,127.0.0.1:5001;" + 
"tries=2",
--- End diff --

Create one driver and use it across tests.


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to pick a dead 
> Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit and that 
> Drillbit then goes down. The client will fail to connect to this Drillbit and 
> error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced 
> in both cases, improving fault tolerance. During further investigation it was 
> also found that an authentication failure is currently thrown as a generic 
> RpcException. We need to improve that as well and capture this case 
> explicitly, since on an auth failure we don't want to try other Drillbits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736362#comment-15736362
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91768005
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/client/DrillClient.java ---
@@ -357,10 +357,54 @@ protected void afterExecute(final Runnable r, final 
Throwable t) {
 super.afterExecute(r, t);
   }
 };
-client = new UserClient(clientName, config, supportComplexTypes, 
allocator, eventLoopGroup, executor);
-logger.debug("Connecting to server {}:{}", endpoint.getAddress(), 
endpoint.getUserPort());
-connect(endpoint);
-connected = true;
+
+// "tries" is max number of unique drillbit to try connecting until 
successfully connected to one of them
+final String connectTriesConf = (props != null) ? 
props.getProperty("tries", "5") : "5";
+
+int connectTriesVal;
+try {
+  connectTriesVal = Math.min(endpoints.size(), 
Integer.parseInt(connectTriesConf));
+} catch (NumberFormatException e) {
+  throw new InvalidConnectionInfoException("Invalid tries value: " + 
connectTriesConf + " specified in " +
+   "connection string");
+}
+
+// If the value provided in the connection string is <=0 then override 
with 1 since we want to try connecting
+// at least once
+connectTriesVal = Math.max(1, connectTriesVal);
+
+int triedEndpointIndex = 0;
+DrillbitEndpoint endpoint;
+
+while (triedEndpointIndex < connectTriesVal) {
+  client = new UserClient(clientName, config, supportComplexTypes, 
allocator, eventLoopGroup, executor);
+  endpoint = endpoints.get(triedEndpointIndex);
+  logger.debug("Connecting to server {}:{}", endpoint.getAddress(), 
endpoint.getUserPort());
+
+  try {
+connect(endpoint);
+connected = true;
+logger.info("Successfully connected to server {}:{}", 
endpoint.getAddress(), endpoint.getUserPort());
+break;
+  } catch (InvalidConnectionInfoException ex) {
+logger.error("Connection to {}:{} failed with error {}. Not 
retrying anymore", endpoint.getAddress(),
+ endpoint.getUserPort(), ex.getMessage());
+throw ex;
+  } catch (RpcException ex) {
+++triedEndpointIndex;
+logger.error("Attempt {}: Failed to connect to server {}:{}", 
triedEndpointIndex, endpoint.getAddress(),
+ endpoint.getUserPort());
+
+// Close the connection
+if (client.isActive()) {
--- End diff --

Shouldn't the client be closed regardless of whether it is active? There may 
be other resources to close.


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we added support for specifying multiple Drillbits in the 
> connection string and randomly choosing one of them. Over time some of the 
> Drillbits specified in the connection string may die, and the client can fail 
> to connect to the Foreman node if the random selection happens to pick a dead 
> Drillbit.
> Even if ZooKeeper is used to select a random Drillbit from the registered 
> ones, there is a small window in which the client selects a Drillbit and that 
> Drillbit then goes down. The client will fail to connect to this Drillbit and 
> error out. 
> Instead, if we try multiple Drillbits (with a configurable tries count in the 
> connection string), the probability of hitting this error window is reduced 
> in both cases, improving fault tolerance. During further investigation it was 
> also found that an authentication failure is currently thrown as a generic 
> RpcException. We need to improve that as well and capture this case 
> explicitly, since on an auth failure we don't want to try other Drillbits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5098) Improving fault tolerance for connection between client and foreman node.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736361#comment-15736361
 ] 

ASF GitHub Bot commented on DRILL-5098:
---

Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/679#discussion_r91795818
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java
 ---
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.client;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.ExecutionException;
+
+import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.exec.ZookeeperHelper;
+import org.apache.drill.exec.coord.ClusterCoordinator;
+import org.apache.drill.exec.exception.DrillbitStartupException;
+import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.rpc.InvalidConnectionInfoException;
+import org.apache.drill.exec.rpc.RpcException;
+import org.apache.drill.exec.server.Drillbit;
+
+import org.apache.drill.exec.server.RemoteServiceSet;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static junit.framework.TestCase.assertTrue;
+import static junit.framework.TestCase.fail;
+
+public class ConnectTriesPropertyTestClusterBits {
+
+  public static StringBuilder bitInfo;
+  public static final String fakeBitsInfo = 
"127.0.0.1:5000,127.0.0.1:5001";
+  public static List<Drillbit> drillbits;
+  public static final int drillBitCount = 1;
+  public static ZookeeperHelper zkHelper;
+  public static RemoteServiceSet remoteServiceSet;
+  public static DrillConfig drillConfig;
+
+  @BeforeClass
+  public static void testSetUp() throws Exception {
+remoteServiceSet = RemoteServiceSet.getLocalServiceSet();
+zkHelper = new ZookeeperHelper();
+zkHelper.startZookeeper(1);
+
+// Creating Drillbits
+drillConfig = zkHelper.getConfig();
+try {
+  int drillBitStarted = 0;
+  drillbits = new ArrayList<>();
+  while(drillBitStarted < drillBitCount){
+drillbits.add(Drillbit.start(drillConfig, remoteServiceSet));
+++drillBitStarted;
+  }
+} catch (DrillbitStartupException e) {
+  throw new RuntimeException("Failed to start drillbits.", e);
+}
+bitInfo = new StringBuilder();
+
+for (int i = 0; i < drillBitCount; ++i) {
+  final DrillbitEndpoint currentEndPoint = 
drillbits.get(i).getContext().getEndpoint();
+  final String currentBitIp = currentEndPoint.getAddress();
+  final int currentBitPort = currentEndPoint.getUserPort();
+  bitInfo.append(",");
+  bitInfo.append(currentBitIp);
+  bitInfo.append(":");
+  bitInfo.append(currentBitPort);
+}
+  }
+
+  @AfterClass
+  public static void testCleanUp(){
+for(int i=0; i < drillBitCount; ++i){
+  drillbits.get(i).close();
--- End diff --

+ spacing in the signature
+ Use `AutoCloseables.close(drillbits);`


> Improving fault tolerance for connection between client and foreman node.
> -
>
> Key: DRILL-5098
> URL: https://issues.apache.org/jira/browse/DRILL-5098
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Client - JDBC
>Reporter: Sorabh Hamirwasia
>Assignee: Sorabh Hamirwasia
>  Labels: doc-impacting, ready-to-commit
> Fix For: 1.10
>
>
> With DRILL-5015 we allowed support for specifying multiple Drillbits in 
> connection string and randomly choosing one out of it. Over time some of the 
> Drillbits specified

[jira] [Commented] (DRILL-4931) Attempting to execute a SELECT against an HBase store results in an IllegalAccessError accessing method "com.google.common.base.Stopwatch.<init>()"

2016-12-09 Thread Matt Keranen (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736333#comment-15736333
 ] 

Matt Keranen commented on DRILL-4931:
-

Same issue in Drill 1.9. As [~stack] mentions above, using the Guava 16 jar 
seems to work: https://github.com/google/guava/wiki/Release16

The Guava 17 jar results in the same error.

> Attempting to execute a SELECT against an HBase store results in an 
> IllegalAccessError accessing method 
> "com.google.common.base.Stopwatch.()"
> ---
>
> Key: DRILL-4931
> URL: https://issues.apache.org/jira/browse/DRILL-4931
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - HBase
>Affects Versions: 1.8.0
>Reporter: T.C. Hydock
>
> I was attempting to follow the "Querying HBase Data" tutorial 
> (https://drill.apache.org/docs/querying-hbase/) against one of our HBase 
> instances and ran into the following error when trying to issue the "SELECT * 
> FROM students;" statement cited in Step #2 of the "Query HBase Tables" 
> section:
> {noformat}
> Error: SYSTEM ERROR: IllegalAccessError: tried to access method 
> com.google.common.base.Stopwatch.<init>()V from class 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> {noformat}
> After doing some research it appears to be a conflict with instantiating the 
> Stopwatch class from the Guava JAR.  I was able to resolve this by swapping 
> out the packaged version of Guava (v18) with an older version (v16).
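
To illustrate the conflict described above: older Guava releases expose a 
public Stopwatch constructor, while Guava 17 and later make the constructors 
non-public and expose factory methods instead, so HBase code compiled against 
the old API fails with an IllegalAccessError when Guava 18 is on the classpath. 
A small example (not Drill or HBase code) of the two call styles, assuming the 
usual Guava behavior:

{code}
import com.google.common.base.Stopwatch;

public class StopwatchCompat {
  public static void main(String[] args) {
    // Supported on recent Guava versions: the factory method.
    Stopwatch sw = Stopwatch.createStarted();
    System.out.println("elapsed: " + sw);

    // What the pre-compiled HBase code effectively does; this only links
    // against Guava 16 or earlier, where the constructor is public:
    //   Stopwatch legacy = new Stopwatch();
  }
}
{code}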



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5108) Reduce output from Maven git-commit-id-plugin

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736343#comment-15736343
 ] 

ASF GitHub Bot commented on DRILL-5108:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/680
  
What does the non-verbose message look like?


> Reduce output from Maven git-commit-id-plugin
> -
>
> Key: DRILL-5108
> URL: https://issues.apache.org/jira/browse/DRILL-5108
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Tools, Build & Test
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> The git-commit-id-plugin grabs information from Git to display during a 
> build. It prints many e-mail addresses and other generic project information. 
> As part of the effort to trim down unit test output, we propose to turn off 
> the verbose output from this plugin.
> Specific change:
> {code}
>   <plugin>
> <groupId>pl.project13.maven</groupId>
> <artifactId>git-commit-id-plugin</artifactId>
> ...
> <configuration>
>   <verbose>false</verbose>
> {code}
> That is, change the verbose setting from true to false.
> In the unlikely event that some build process depends on the verbose output, 
> we can make the setting a configurable parameter, defaulting to false.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5113) Upgrade Maven RAT plugin to avoid annoying XML errors

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736332#comment-15736332
 ] 

ASF GitHub Bot commented on DRILL-5113:
---

Github user sudheeshkatkam commented on the issue:

https://github.com/apache/drill/pull/682
  
+1


> Upgrade Maven RAT plugin to avoid annoying XML errors
> -
>
> Key: DRILL-5113
> URL: https://issues.apache.org/jira/browse/DRILL-5113
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> Build Drill with most Maven logging turned off. On every sub-project you will 
> see the following:
> {code}
> Compiler warnings:
>   WARNING:  'org.apache.xerces.jaxp.SAXParserImpl: Property 
> 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
> [INFO] Starting audit...
> Audit done.
> {code}
> The warning is a known issue with Java: 
> http://bugs.java.com/view_bug.do?bug_id=8016153
> The RAT folks appear to have patched this: version 0.12 of the plugin no 
> longer produces the warning. Upgrade Drill's {{pom.xml}} file to use this 
> version instead of the anonymous version currently used.
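
For reference, a pinned plugin entry in the root {{pom.xml}} would look roughly 
like the snippet below; the surrounding configuration in Drill's actual build 
file is not shown here and may differ.

{code}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <version>0.12</version>
</plugin>
{code}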



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4764) Parquet file with INT_16, etc. logical types not supported by simple SELECT

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735787#comment-15735787
 ] 

ASF GitHub Bot commented on DRILL-4764:
---

Github user Serhii-Harnyk commented on a diff in the pull request:

https://github.com/apache/drill/pull/673#discussion_r91753304
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/ParquetFixedWidthDictionaryReaders.java
 ---
@@ -56,6 +58,31 @@ protected void readField(long recordsToReadInThisPass) {
 }
--- End diff --

done


> Parquet file with INT_16, etc. logical types not supported by simple SELECT
> ---
>
> Key: DRILL-4764
> URL: https://issues.apache.org/jira/browse/DRILL-4764
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Data Types
>Affects Versions: 1.6.0
>Reporter: Paul Rogers
>Assignee: Serhii Harnyk
> Attachments: int_16.parquet, int_8.parquet, uint_16.parquet, 
> uint_32.parquet, uint_8.parquet
>
>
> Create a Parquet file with the following schema:
> message int16Data { required int32 index; required int32 value (INT_16); }
> Store it as int_16.parquet in the local file system. Query it with:
> SELECT * from `local`.`root`.`int_16.parquet`;
> The result, in the web UI, is this error:
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> UnsupportedOperationException: unsupported type: INT32 INT_16 Fragment 0:0 
> [Error Id: c63f66b4-e5a9-4a35-9ceb-546b74645dd4 on 172.30.1.28:31010]
> The INT_16 logical (or "original") type simply tells consumers of the file 
> that the data is actually a 16-bit signed int. Presumably, this should tell 
> Drill to use the SmallIntVector (or NullableSmallIntVector) class for 
> storage. Without supporting this annotation, even 16-bit integers must be 
> stored as 32 bits within Drill.
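
A minimal sketch of the mapping the description implies: the "original type" 
annotation on an INT32 column selects a narrower value vector instead of being 
rejected. The switch below is illustrative only and is not the reader code from 
the pull request.

{code}
// Illustrative only: routing annotated INT32 Parquet columns to narrower vectors.
public final class Int32AnnotationSketch {
  static String vectorKindFor(String originalType) {
    switch (originalType) {
      case "INT_8":   return "TinyIntVector";   // 8-bit signed
      case "INT_16":  return "SmallIntVector";  // 16-bit signed
      case "UINT_8":  return "UInt1Vector";     // unsigned variants
      case "UINT_16": return "UInt2Vector";
      case "UINT_32": return "UInt4Vector";
      default:        return "IntVector";       // plain 32-bit signed INT32
    }
  }

  public static void main(String[] args) {
    System.out.println(vectorKindFor("INT_16")); // prints SmallIntVector
  }
}
{code}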



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-5117) Compile error when query a json file with 1000+columns

2016-12-09 Thread Zelaine Fong (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zelaine Fong updated DRILL-5117:

 Reviewer: Jinfeng Ni
Fix Version/s: (was: Future)

Assigned [~jni] as code reviewer.

> Compile error when query a json file with 1000+columns
> --
>
> Key: DRILL-5117
> URL: https://issues.apache.org/jira/browse/DRILL-5117
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
>
> The query fails with a compile error when querying a json file with 
> 1000+ columns:
> {noformat}
> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/tooManyFields.json` limit 1;
> Error: SYSTEM ERROR: JaninoRuntimeException: Code attribute in class 
> "org.apache.drill.exec.test.generated.CopierGen0" grows beyond 64 KB
> Fragment 0:0
> [Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from sqlline.log:
> {noformat}
> 2016-12-09 13:43:38,207 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 27b54af4-b41f-0682-e50d-626de4eff68e: select * from 
> dfs.`/tmp/tooManyFields.json` limit 1
> 2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,532 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,547 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Failure finding Drillbit running on host 
> localhost.  Skipping affinity to that host.
> 2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Time: 13ms total, 13.922965ms avg, 13ms max.
> 2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Earliest start: 6.956000 μs, Latest start: 6.956000 μs, 
> Average start: 6.956000 μs .
> 2016-12-09 13:43:38,750 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2016-12-09 13:43:38,761 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State to report: RUNNING
> 2016-12-09 13:43:39,375 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] WARN  
> o.a.d.exec.compile.JDKClassCompiler - JDK Java compiler not available - 
> probably you're running Drill with a JRE and not a JDK
> 2016-12-09 13:43:40,533 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State change requested RUNNING --> 
> FAILED
> 2016-12-09 13:43:40,550 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State change requested FAILED --> 
> FINISHED
> 2016-12-09 13:43:40,552 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: JaninoRuntimeException: 
> Code attribute in class "org.apache.drill.exec.test.generated.CopierGen0" 
> grows beyond 64 KB
> Fragment 0:0
> [Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> JaninoRuntimeException: Code attribute in class 
> "org.apache.drill.exec.test.generated.CopierGen0" grows beyond 64 KB
> Fragment 0:0
> [Error Id: a1306543-4d66-4bb0-b687-5802

[jira] [Commented] (DRILL-5118) Select Query Limit is not working in Drill 1.8

2016-12-09 Thread Zelaine Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735744#comment-15735744
 ] 

Zelaine Fong commented on DRILL-5118:
-

Not sure if this is related to DRILL-4905.

> Select Query Limit is not working in Drill 1.8
> --
>
> Key: DRILL-5118
> URL: https://issues.apache.org/jira/browse/DRILL-5118
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Client - CLI, Client - HTTP
>Affects Versions: 1.8.0
>Reporter: Gopal Nagar
>
> Hi All,
> Drill 1.8.0 has been installed on an AWS node with 32 GB RAM and 80 GB 
> storage. I didn't specify memory separately for Drill. I am trying to join two 
> tables with 4607818 and 14273378 rows respectively, and I have put limit 100 
> in the query.
> But after displaying the 100 rows on the Drill CLI, the query doesn't 
> terminate; it doesn't return to the CLI prompt and keeps processing the data 
> in the background. Please help.
> Join Query 
> --
> select t1.col FROM hive.table1 as t1 join hive.table2 as t2 on t1.col = 
> t2.col limit 100;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-4842) SELECT * on JSON data results in NumberFormatException

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735704#comment-15735704
 ] 

ASF GitHub Bot commented on DRILL-4842:
---

Github user Serhii-Harnyk commented on the issue:

https://github.com/apache/drill/pull/594
  
@chunhui-shi, could you please review the new changes?


> SELECT * on JSON data results in NumberFormatException
> --
>
> Key: DRILL-4842
> URL: https://issues.apache.org/jira/browse/DRILL-4842
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Affects Versions: 1.2.0
>Reporter: Khurram Faraaz
>Assignee: Serhii Harnyk
> Attachments: tooManyNulls.json
>
>
> Note that SELECT c1 returns correct results; the failure is seen when we do 
> SELECT star. json.all_text_mode was set to true.
> The JSON file tooManyNulls.json has the key c1 with a null value in the first 
> 4096 records, and in the 4097th record c1 has the value "Hello World".
> git commit ID : aaf220ff
> MapR Drill 1.8.0 RPM
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> alter session set 
> `store.json.all_text_mode`=true;
> +---++
> |  ok   |  summary   |
> +---++
> | true  | store.json.all_text_mode updated.  |
> +---++
> 1 row selected (0.27 seconds)
> 0: jdbc:drill:schema=dfs.tmp> SELECT c1 FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> +--+
> |  c1  |
> +--+
> | Hello World  |
> +--+
> 1 row selected (0.243 seconds)
> 0: jdbc:drill:schema=dfs.tmp> select * FROM `tooManyNulls.json` WHERE c1 IN 
> ('Hello World');
> Error: SYSTEM ERROR: NumberFormatException: Hello World
> Fragment 0:0
> [Error Id: 9cafb3f9-3d5c-478a-b55c-900602b8765e on centos-01.qa.lab:31010]
>  (java.lang.NumberFormatException) Hello World
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI():95
> 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.varTypesToInt():120
> org.apache.drill.exec.test.generated.FiltererGen1169.doSetup():45
> org.apache.drill.exec.test.generated.FiltererGen1169.setup():54
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.generateSV2Filterer():195
> 
> org.apache.drill.exec.physical.impl.filter.FilterRecordBatch.setupNewSchema():107
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():78
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():94
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.record.AbstractRecordBatch.next():119
> org.apache.drill.exec.record.AbstractRecordBatch.next():109
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> org.apache.drill.exec.record.AbstractRecordBatch.next():162
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
> java.security.AccessController.doPrivileged():-2
> javax.security.auth.Subject.doAs():415
> org.apache.hadoop.security.UserGroupInformation.doAs():1595
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
> org.apache.drill.common.SelfCleaningRunnable.run():38
> java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> java.lang.Thread.run():745 (state=,code=0)
> 0: jdbc:drill:schema=dfs.tmp>
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.NumberFormatException: Hello World
> at 
> org.apache.drill.exec.expr.fn.impl.StringFunctionHelpers.nfeI(StringFunctionHelpers.java:95)
>  ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at 
> org.apache.drill.exec.exp

[jira] [Commented] (DRILL-5117) Compile error when query a json file with 1000+columns

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15735691#comment-15735691
 ] 

ASF GitHub Bot commented on DRILL-5117:
---

GitHub user Serhii-Harnyk opened a pull request:

https://github.com/apache/drill/pull/686

DRILL-5117: Compile error when query a json file with 1000+columns



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Serhii-Harnyk/drill DRILL-5117

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/drill/pull/686.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #686


commit 00eaf30fd662530d8bd62059b85b0ad179768fdb
Author: Serhii-Harnyk 
Date:   2016-12-08T20:08:34Z

DRILL-5117: Compile error when query a json file with 1000+columns
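
Background for the error this patch addresses: the JVM limits each method's 
bytecode to 64 KB, and a generated copier that handles 1000+ columns in a 
single method exceeds that limit. The sketch below shows the general technique 
of splitting the per-column statements across several smaller methods; the 
names are made up and this is not the change in this pull request.

{code}
// Illustrative sketch: keep generated code under the JVM's 64 KB per-method
// bytecode limit by emitting the per-column statements in fixed-size blocks.
import java.util.List;

public final class SplitCopierSketch {
  private static final int COLUMNS_PER_METHOD = 100; // assumed block size

  /** Emits doCopy0(), doCopy1(), ... plus a doCopy() that calls them in order. */
  static String generateCopier(List<String> columnNames) {
    StringBuilder methods = new StringBuilder();
    StringBuilder dispatcher = new StringBuilder("  void doCopy(int in, int out) {\n");
    for (int block = 0; block * COLUMNS_PER_METHOD < columnNames.size(); block++) {
      int start = block * COLUMNS_PER_METHOD;
      int end = Math.min(start + COLUMNS_PER_METHOD, columnNames.size());
      methods.append("  void doCopy").append(block).append("(int in, int out) {\n");
      for (int i = start; i < end; i++) {
        // one small statement per column keeps each method far below the limit
        methods.append("    copyColumn(\"").append(columnNames.get(i))
               .append("\", in, out);\n");
      }
      methods.append("  }\n");
      dispatcher.append("    doCopy").append(block).append("(in, out);\n");
    }
    dispatcher.append("  }\n");
    return methods.append(dispatcher).toString();
  }
}
{code}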




> Compile error when query a json file with 1000+columns
> --
>
> Key: DRILL-5117
> URL: https://issues.apache.org/jira/browse/DRILL-5117
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Serhii Harnyk
>Assignee: Serhii Harnyk
> Fix For: Future
>
>
> The query fails with a compile error when querying a json file with 
> 1000+ columns:
> {noformat}
> 0: jdbc:drill:zk=local> select * from dfs.`/tmp/tooManyFields.json` limit 1;
> Error: SYSTEM ERROR: JaninoRuntimeException: Code attribute in class 
> "org.apache.drill.exec.test.generated.CopierGen0" grows beyond 64 KB
> Fragment 0:0
> [Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010] 
> (state=,code=0)
> {noformat}
> Stack trace from sqlline.log:
> {noformat}
> 2016-12-09 13:43:38,207 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 27b54af4-b41f-0682-e50d-626de4eff68e: select * from 
> dfs.`/tmp/tooManyFields.json` limit 1
> 2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,532 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-12-09 13:43:38,547 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Failure finding Drillbit running on host 
> localhost.  Skipping affinity to that host.
> 2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Time: 13ms total, 13.922965ms avg, 13ms max.
> 2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Earliest start: 6.956000 μs, Latest start: 6.956000 μs, 
> Average start: 6.956000 μs .
> 2016-12-09 13:43:38,750 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2016-12-09 13:43:38,761 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State to report: RUNNING
> 2016-12-09 13:43:39,375 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] WARN  
> o.a.d.exec.compile.JDKClassCompiler - JDK Java compiler not available - 
> probably you're running Drill with a JRE and not a JDK
> 2016-12-09 13:43:40,533 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: State change requested RUNNING --> 
> FAILED
> 2016-12-09 13:43:40,550 [27b54af4-b41f-0682-e5

[jira] [Created] (DRILL-5118) Select Query Limit is not working in Drill 1.8

2016-12-09 Thread Gopal Nagar (JIRA)
Gopal Nagar created DRILL-5118:
--

 Summary: Select Query Limit is not working in Drill 1.8
 Key: DRILL-5118
 URL: https://issues.apache.org/jira/browse/DRILL-5118
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - CLI, Client - HTTP
Affects Versions: 1.8.0
Reporter: Gopal Nagar


Hi All,

Drill 1.8.0 has been installed on an AWS node with 32 GB RAM and 80 GB storage. 
I didn't specify memory separately for Drill. I am trying to join two tables 
with 4607818 and 14273378 rows respectively, and I have put limit 100 in the 
query.

But after displaying the 100 rows on the Drill CLI, the query doesn't 
terminate; it doesn't return to the CLI prompt and keeps processing the data in 
the background. Please help.

Join Query 
--
select t1.col FROM hive.table1 as t1 join hive.table2 as t2 on t1.col = t2.col 
limit 100;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DRILL-5117) Compile error when query a json file with 1000+columns

2016-12-09 Thread Serhii Harnyk (JIRA)
Serhii Harnyk created DRILL-5117:


 Summary: Compile error when query a json file with 1000+columns
 Key: DRILL-5117
 URL: https://issues.apache.org/jira/browse/DRILL-5117
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Codegen
Affects Versions: 1.8.0
Reporter: Serhii Harnyk
Assignee: Serhii Harnyk
 Fix For: Future


The query fails with a compile error when querying a json file with 1000+ columns:
{noformat}
0: jdbc:drill:zk=local> select * from dfs.`/tmp/tooManyFields.json` limit 1;
Error: SYSTEM ERROR: JaninoRuntimeException: Code attribute in class 
"org.apache.drill.exec.test.generated.CopierGen0" grows beyond 64 KB

Fragment 0:0

[Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010] 
(state=,code=0)
{noformat}

Stack trace from sqlline.log:
{noformat}
2016-12-09 13:43:38,207 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
27b54af4-b41f-0682-e50d-626de4eff68e: select * from 
dfs.`/tmp/tooManyFields.json` limit 1
2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,340 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,341 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,532 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-12-09 13:43:38,547 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Failure finding Drillbit running on host 
localhost.  Skipping affinity to that host.
2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 using 
1 threads. Time: 13ms total, 13.922965ms avg, 13ms max.
2016-12-09 13:43:38,548 [27b54af4-b41f-0682-e50d-626de4eff68e:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 using 
1 threads. Earliest start: 6.956000 μs, Latest start: 6.956000 μs, Average 
start: 6.956000 μs .
2016-12-09 13:43:38,750 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: 
State change requested AWAITING_ALLOCATION --> RUNNING
2016-12-09 13:43:38,761 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: 
State to report: RUNNING
2016-12-09 13:43:39,375 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] WARN  
o.a.d.exec.compile.JDKClassCompiler - JDK Java compiler not available - 
probably you're running Drill with a JRE and not a JDK
2016-12-09 13:43:40,533 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: 
State change requested RUNNING --> FAILED
2016-12-09 13:43:40,550 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27b54af4-b41f-0682-e50d-626de4eff68e:0:0: 
State change requested FAILED --> FINISHED
2016-12-09 13:43:40,552 [27b54af4-b41f-0682-e50d-626de4eff68e:frag:0:0] ERROR 
o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: JaninoRuntimeException: 
Code attribute in class "org.apache.drill.exec.test.generated.CopierGen0" grows 
beyond 64 KB

Fragment 0:0

[Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
JaninoRuntimeException: Code attribute in class 
"org.apache.drill.exec.test.generated.CopierGen0" grows beyond 64 KB

Fragment 0:0

[Error Id: a1306543-4d66-4bb0-b687-5802002833b2 on user515050-pc:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.8.0.jar:1.8.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
 [drill-java-exec-1.8.0.jar:1.8.0]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExec

[jira] [Closed] (DRILL-4941) UnsupportedOperationException : CASE WHEN true or null then 1 else 0 end

2016-12-09 Thread Serhii Harnyk (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serhii Harnyk closed DRILL-4941.

Resolution: Won't Fix

> UnsupportedOperationException : CASE WHEN true or null then 1 else 0 end
> 
>
> Key: DRILL-4941
> URL: https://issues.apache.org/jira/browse/DRILL-4941
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Khurram Faraaz
>Assignee: Serhii Harnyk
>
> Below case expression results in UnsupportedOperationException on Drill 1.9.0 
> git commit ID: 4edabe7a
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> SELECT (CASE WHEN true or null then 1 else 0 
> end) from (VALUES(1));
> Error: VALIDATION ERROR: class org.apache.calcite.sql.SqlLiteral: NULL
> SQL Query null
> [Error Id: 822ec7b0-3630-478c-b82a-0acedc39a560 on centos-01.qa.lab:31010] 
> (state=,code=0)
> -- changing null to "not null" in the search condition causes Drill to return 
> results
> 0: jdbc:drill:schema=dfs.tmp> SELECT (CASE WHEN true or not null then 1 else 
> 0 end) from (VALUES(1));
> +-+
> | EXPR$0  |
> +-+
> | 1   |
> +-+
> 1 row selected (0.11 seconds)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> Caused by: java.lang.UnsupportedOperationException: class 
> org.apache.calcite.sql.SqlLiteral: NULL
> at org.apache.calcite.util.Util.needToImplement(Util.java:920) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.getValidatedNodeType(SqlValidatorImpl.java:1426)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.SqlBinaryOperator.adjustType(SqlBinaryOperator.java:103)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:511) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.SqlBinaryOperator.deriveType(SqlBinaryOperator.java:143)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.fun.SqlCaseOperator.checkOperandTypes(SqlCaseOperator.java:178)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:430) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.fun.SqlCaseOperator.deriveType(SqlCaseOperator.java:164)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4337)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:4324)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:130) 
> ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1501)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1484)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:446)
>  ~[calcite-core-1.4.0-drill-r18.jar:1.4.0-drill-r18]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5043) Function that returns a unique id per session/connection similar to MySQL's CONNECTION_ID()

2016-12-09 Thread Khurram Faraaz (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15734779#comment-15734779
 ] 

Khurram Faraaz commented on DRILL-5043:
---

[~nagarajanchinnasamy] you may want to take a look at DRILL-4956. That feature 
uses a unique session ID to identify temporary tables in Drill, so you should 
look at how unique session IDs are implemented for temporary tables there.

> Function that returns a unique id per session/connection similar to MySQL's 
> CONNECTION_ID()
> ---
>
> Key: DRILL-5043
> URL: https://issues.apache.org/jira/browse/DRILL-5043
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Nagarajan Chinnasamy
>Priority: Minor
>  Labels: CONNECTION_ID, SESSION, UDF
> Attachments: 01_session_id_sqlline.png, 
> 02_session_id_webconsole_query.png, 03_session_id_webconsole_result.png
>
>
> Design and implement a function that returns a unique id per 
> session/connection similar to MySQL's CONNECTION_ID().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DRILL-5052) Option to debug generated Java code using an IDE

2016-12-09 Thread Arina Ielchiieva (JIRA)

 [ 
https://issues.apache.org/jira/browse/DRILL-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-5052:

Labels: ready-to-commit  (was: )

> Option to debug generated Java code using an IDE
> 
>
> Key: DRILL-5052
> URL: https://issues.apache.org/jira/browse/DRILL-5052
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>  Labels: ready-to-commit
>
> Drill makes extensive use of Java code generation to implement its operators. 
> Drill uses sophisticated techniques to blend generated code with pre-compiled 
> template code. An unfortunate side-effect of this behavior is that it is very 
> difficult to visualize and debug the generated code.
> As it turns out, Drill's code-merge facility is, in essence, a do-it-yourself 
> version of subclassing. The Drill "template" is the parent class; the 
> generated code is the subclass. But, rather than using plain-old subclassing, 
> Drill combines the code from the two classes into a single "artificial" 
> packet of byte codes for which no source exists.
> Modify the code generation path to optionally allow "plain-old Java" 
> compilation: the generated code is a subclass of the template. Compile the 
> generated code as a plain-old Java class with no byte-code fix-up. Write the 
> code to a known location that the IDE can search when looking for source 
> files.
> With this change, developers can turn on the above feature, set a breakpoint 
> in a template, then step directly into the generated Java code called from 
> the template.
> This feature should be an option, enabled by developers when needed. The 
> existing byte-code technique should be used for production code generation.
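
To make the approach above concrete, here is a minimal sketch of the 
"plain-old Java" shape it describes. The class and method names are 
illustrative only, not Drill's actual template or generated classes.

{code:java}
// Hand-written "template": the pre-compiled half of an operator.
abstract class ExampleTemplate {
  public void processRecords(int count) {
    for (int i = 0; i < count; i++) {
      doEval(i, i);          // a breakpoint here steps into generated code
    }
  }
  protected abstract void doEval(int inIndex, int outIndex);
}

// What the generator would emit in "plain-old Java" mode: an ordinary
// subclass written to a known source directory and compiled with no
// byte-code fix-up, so the IDE can find the source and step through it.
class ExampleGen1 extends ExampleTemplate {
  @Override
  protected void doEval(int inIndex, int outIndex) {
    // generated per-column logic would be emitted here
  }
}
{code}

In production the existing byte-code merge path would still be used; the 
subclass form exists only so a developer can set a breakpoint in the template 
and step directly into the generated method.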



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5052) Option to debug generated Java code using an IDE

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15734766#comment-15734766
 ] 

ASF GitHub Bot commented on DRILL-5052:
---

Github user arina-ielchiieva commented on the issue:

https://github.com/apache/drill/pull/660
  
Looks good.


> Option to debug generated Java code using an IDE
> 
>
> Key: DRILL-5052
> URL: https://issues.apache.org/jira/browse/DRILL-5052
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> Drill makes extensive use of Java code generation to implement its operators. 
> Drill uses sophisticated techniques to blend generated code with pre-compiled 
> template code. An unfortunate side-effect of this behavior is that it is very 
> difficult to visualize and debug the generated code.
> As it turns out, Drill's code-merge facility is, in essence, a do-it-yourself 
> version of subclassing. The Drill "template" is the parent class, the 
> generated code is the subclass. But, rather than using plain-old subclassing, 
> Drill combines the code from the two classes into a single "artificial" 
> packet of byte codes for which no source exists.
> Modify the code generation path to optionally allow "plain-old Java" 
> compilation: the generated code is a subclass of the template. Compile the 
> generated code as a plain-old Java class with no byte-code fix-up. Write the 
> code to a known location that the IDE can search when looking for source 
> files.
> With this change, developers can turn on the above feature, set a breakpoint 
> in a template, then step directly into the generated Java code called from 
> the template.
> This feature should be an option, enabled by developers when needed. The 
> existing byte-code technique should be used for production code generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-5052) Option to debug generated Java code using an IDE

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15734760#comment-15734760
 ] 

ASF GitHub Bot commented on DRILL-5052:
---

Github user arina-ielchiieva commented on a diff in the pull request:

https://github.com/apache/drill/pull/660#discussion_r91679324
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/expr/ClassGenerator.java ---
@@ -246,6 +246,12 @@ public void rotateBlock() {
 rotateBlock(BlkCreateMode.TRUE);
   }
 
+  /**
+   * Create a new code block, closing the current block.
+   *
+   * @param mode
+   */
--- End diff --

IntelliJ, it's more strict :)
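
For reference, one hypothetical way the flagged {{@param}} tag could be 
completed; the wording and the stub class below are assumptions made only so 
the fragment stands alone, not the text that was eventually committed.

{code:java}
class JavadocSketch {

  enum BlkCreateMode { TRUE, FALSE }   // stand-in for the real enum

  /**
   * Creates a new code block, closing the current block.
   *
   * @param mode whether the new block should always be created, or only
   *             when the current block already has content
   */
  public void rotateBlock(BlkCreateMode mode) {
    // rotation logic elided in this sketch
  }
}
{code}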


> Option to debug generated Java code using an IDE
> 
>
> Key: DRILL-5052
> URL: https://issues.apache.org/jira/browse/DRILL-5052
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Execution - Codegen
>Affects Versions: 1.8.0
>Reporter: Paul Rogers
>Assignee: Paul Rogers
>Priority: Minor
>
> Drill makes extensive use of Java code generation to implement its operators. 
> Drill uses sophisticated techniques to blend generated code with pre-compiled 
> template code. An unfortunate side-effect of this behavior is that it is very 
> difficult to visualize and debug the generated code.
> As it turns out, Drill's code-merge facility is, in essence, a do-it-yourself 
> version of subclassing. The Drill "template" is the parent class, the 
> generated code is the subclass. But, rather than using plain-old subclassing, 
> Drill combines the code from the two classes into a single "artificial" 
> packet of byte codes for which no source exists.
> Modify the code generation path to optionally allow "plain-old Java" 
> compilation: the generated code is a subclass of the template. Compile the 
> generated code as a plain-old Java class with no byte-code fix-up. Write the 
> code to a known location that the IDE can search when looking for source 
> files.
> With this change, developers can turn on the above feature, set a breakpoint 
> in a template, then step directly into the generated Java code called from 
> the template.
> This feature should be an option, enabled by developers when needed. The 
> existing byte-code technique should be used for production code generation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)