[jira] [Created] (DRILL-5458) REPEATED_COUNT over varchar data results in SchemaChangeException

2017-05-01 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-5458:
---------------------------------

 Summary: REPEATED_COUNT over varchar data results in 
SchemaChangeException
 Key: DRILL-5458
 URL: https://issues.apache.org/jira/browse/DRILL-5458
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.11.0
Reporter: Khurram Faraaz


REPEATED_COUNT over varchar type data results in SchemaChangeException
Apache Drill 1.11.0 commit id: 3e8b01d5

{noformat}
// CTAS over JSON file.
0: jdbc:drill:schema=dfs.tmp> create table tblwarr01 as select 
convert_to(t.arr,'JSON') arr, id from `rptd_count.json` t;
+-----------+----------------------------+
| Fragment  | Number of records written  |
+-----------+----------------------------+
| 0_0       | 10                         |
+-----------+----------------------------+
1 row selected (0.221 seconds)

0: jdbc:drill:schema=dfs.tmp> select * from tblwarr01;
+--------------+-----+
| arr          | id  |
+--------------+-----+
| [B@67cf1438  | 1   |
| [B@4c389dc6  | 2   |
| [B@18fe5942  | 3   |
| [B@629608df  | 4   |
| [B@68209b09  | 5   |
| [B@34a2a147  | 6   |
| [B@210a5750  | 7   |
| [B@2dea5622  | 8   |
| [B@73bce9ba  | 9   |
| [B@7794edb2  | 10  |
+--------------+-----+
10 rows selected (0.209 seconds)
{noformat}

{noformat}
0: jdbc:drill:schema=dfs.tmp> select convert_from(t.arr,'UTF8') from tblwarr01 
t;
+---------------------------------------+
|                EXPR$0                 |
+---------------------------------------+
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
| [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]  |
+---------------------------------------+
10 rows selected (0.195 seconds)

// Performing a REPEATED_COUNT on the array results in SchemaChangeException

0: jdbc:drill:schema=dfs.tmp> select REPEATED_COUNT(convert_from(t.arr,'UTF8')) 
from tblwarr01 t;
Error: SYSTEM ERROR: SchemaChangeException: Failure while trying to materialize 
incoming schema.  Errors:

Error in expression at index -1.  Error: Missing function implementation: 
[repeated_count(VARCHAR-REQUIRED)].  Full expression: --UNKNOWN EXPRESSION--..

Fragment 0:0

[Error Id: 1a891918-79f6-49d1-a594-f9ec5f28a99a on centos-01.qa.lab:31010] 
(state=,code=0)
{noformat}

Data used in CTAS.
{noformat}
[root@centos-01 ~]# cat rptd_count.json
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":1}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":2}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":3}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":4}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":5}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":6}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":7}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":8}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":9}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":10}
[root@centos-01 ~]#
{noformat}

Stack trace from drillbit.log

{noformat}
2017-05-02 05:17:03,651 [26f7e9b0-5f11-e764-6fa8-92c27ca2c6cf:frag:0:0] ERROR 
o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: SchemaChangeException: 
Failure while trying to materialize incoming schema.  Errors:

Error in expression at index -1.  Error: Missing function implementation: 
[repeated_count(VARCHAR-REQUIRED)].  Full expression: --UNKNOWN EXPRESSION--..

Fragment 0:0

[Error Id: 1a891918-79f6-49d1-a594-f9ec5f28a99a on centos-01.qa.lab:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
SchemaChangeException: Failure while trying to materialize incoming schema.  
Errors:

Error in expression at index -1.  Error: Missing function implementation: 
[repeated_count(VARCHAR-REQUIRED)].  Full expression: --UNKNOWN EXPRESSION--..

Fragment 0:0

[Error Id: 1a891918-79f6-49d1-a594-f9ec5f28a99a on centos-01.qa.lab:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
 ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:295)
 [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:264)
 [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_91]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_91]

[GitHub] drill pull request #821: DRILL-5450: Fix initcap function to convert upper c...

2017-05-01 Thread paul-rogers
Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/821#discussion_r114249334
  
--- Diff: 
exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestStringFunctions.java
 ---
@@ -308,4 +308,58 @@ public void testReverseLongVarChars() throws Exception {
       FileUtils.deleteQuietly(path);
     }
   }
+
+  @Test
+  public void testLower() throws Exception {
+    testBuilder()
+        .sqlQuery("select\n" +
+            "lower('ABC') col_upper,\n" +
+            "lower('abc') col_lower,\n" +
--- End diff --

Please add tests for Greek and Cyrillic. Our source encoding is UTF-8, so 
you can enter the characters directly. Or, if that does not work, you can 
instead use the Java Unicode encoding: \u1234.

If the tests fail because of parsing of SQL, please file a bug. If they 
fail because the function above does not support UTF-8, please file a different 
bug.

In either case, you can then comment out the test cases and add a comment 
that says that they fail due to DRILL-, whatever your bug number turns out 
to be.
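
A minimal sketch of the kind of test being requested, reusing the testBuilder() harness from the diff above; the query shape, the (values(1)) source, and the expected baselines are illustrative assumptions, not an existing Drill test:

```java
  @Test
  public void testLowerNonAscii() throws Exception {
    // Hypothetical test: Greek and Cyrillic literals entered directly as UTF-8.
    testBuilder()
        .sqlQuery("select lower('ΑΒΓ') col_greek, lower('АБВ') col_cyrillic from (values(1))")
        .unOrdered()
        .baselineColumns("col_greek", "col_cyrillic")
        .baselineValues("αβγ", "абв")
        .go();
  }
```

Whether this passes depends on exactly the SQL-parsing and UTF-8 issues described above; if it fails, that is the bug to file.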


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request #821: DRILL-5450: Fix initcap function to convert upper c...

2017-05-01 Thread paul-rogers
Github user paul-rogers commented on a diff in the pull request:

https://github.com/apache/drill/pull/821#discussion_r114249116
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/StringFunctionHelpers.java
 ---
@@ -144,41 +144,28 @@ public static int varTypesToInt(final int start, final int end, DrillBuf buffer)
     return result;
   }
 
-  // Assumes Alpha as [A-Za-z0-9]
-  // white space is treated as everything else.
+  /**
+   * Capitalizes first letter in each word.
+   * Any symbol except digits and letters is considered as word delimiter.
+   *
+   * @param start start position in input buffer
+   * @param end end position in input buffer
+   * @param inBuf buffer with input characters
+   * @param outBuf buffer with output characters
+   */
   public static void initCap(int start, int end, DrillBuf inBuf, DrillBuf outBuf) {
-    boolean capNext = true;
+    boolean capitalizeNext = true;
     int out = 0;
     for (int id = start; id < end; id++, out++) {
-      byte currentByte = inBuf.getByte(id);
-
-      // 'A - Z' : 0x41 - 0x5A
-      // 'a - z' : 0x61 - 0x7A
-      // '0-9' : 0x30 - 0x39
-      if (capNext) { // curCh is whitespace or first character of word.
-        if (currentByte >= 0x30 && currentByte <= 0x39) { // 0-9
-          capNext = false;
-        } else if (currentByte >= 0x41 && currentByte <= 0x5A) { // A-Z
-          capNext = false;
-        } else if (currentByte >= 0x61 && currentByte <= 0x7A) { // a-z
-          capNext = false;
-          currentByte -= 0x20; // Uppercase this character
-        }
-        // else {} whitespace
-      } else { // Inside of a word or white space after end of word.
-        if (currentByte >= 0x30 && currentByte <= 0x39) { // 0-9
-          // noop
-        } else if (currentByte >= 0x41 && currentByte <= 0x5A) { // A-Z
-          currentByte -= 0x20; // Lowercase this character
-        } else if (currentByte >= 0x61 && currentByte <= 0x7A) { // a-z
-          // noop
-        } else { // whitespace
-          capNext = true;
-        }
+      int currentByte = inBuf.getByte(id);
--- End diff --

This code works only for ASCII, but not for UTF-8. UTF-8 is a multi-byte 
code that requires special encoding/decoding to convert to Unicode characters. 
Without that encoding, this method won't work for Cyrillic, Greek or any other 
character set with upper/lower distinctions.

Since this method never worked, it is probably OK to make it a bit less 
broken than before: at least now it works for ASCII. Please add unit tests 
below, then file a JIRA, for the fact that this function does not work with 
UTF-8 despite the fact that Drill claims it supports UTF-8.
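
For contrast, a minimal sketch of a code-point-based initcap that would handle Cyrillic and Greek; it decodes the UTF-8 bytes up front and is illustrative only, not the Drill implementation (which must work directly on DrillBuf bytes):

```java
import java.nio.charset.StandardCharsets;

public class InitCapUtf8Sketch {
  /** Capitalizes the first letter of each word; any non-letter/digit is a word delimiter. */
  public static String initCap(byte[] utf8) {
    final String in = new String(utf8, StandardCharsets.UTF_8); // decode multi-byte UTF-8 first
    final StringBuilder out = new StringBuilder(in.length());
    boolean capitalizeNext = true;
    int i = 0;
    while (i < in.length()) {
      final int cp = in.codePointAt(i);
      if (Character.isLetterOrDigit(cp)) {
        out.appendCodePoint(capitalizeNext ? Character.toTitleCase(cp) : Character.toLowerCase(cp));
        capitalizeNext = false;
      } else {
        out.appendCodePoint(cp); // word delimiter, copied through unchanged
        capitalizeNext = true;
      }
      i += Character.charCount(cp);
    }
    return out.toString();
  }
}
```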




[GitHub] drill pull request #822: DRILL-5457: Spill implementation for Hash Aggregate

2017-05-01 Thread Ben-Zvi
GitHub user Ben-Zvi opened a pull request:

https://github.com/apache/drill/pull/822

DRILL-5457: Spill implementation for Hash Aggregate

This pull-request is for the work on enabling memory spill for the Hash 
Aggregate Operator.

To assist in reviewing this extensive code change, listed below are the various 
topics/issues that describe the implementation decisions and give some code 
pointers. A reviewer can read each item and peek into its relevant code, or 
read all the items first (and comment on the design decisions as well).

The last topic is by no means "least": it describes the many issues and 
solutions around estimating the memory size of batches (and of hash tables, 
etc.). This work took a significant amount of time and will need some more to 
get better.

(Most of the code changes are in HashAggTemplate.java, hence that file is 
not mentioned specifically below.)

### Aggregation phase:
   The code was changed to pass the aggregation phase information (whether 
this is a single phase, the 1st of two phases, or the 2nd of two phases) from 
the planner to the Hash Aggregate (HAG) operator code.
(See HashAggregate.java, AggPrelBase.java, HashAggPrel.java)

### Partitioning:
  The data (rows/groups) coming into the HAG is partitioned into (a power 
of 2) number of partitions, based on the N least significant bits of the hash 
value (computed out of the row's key columns).
  Each partition can be handled independently of the others. Ideally each 
partition should fit into the available memory. The number of partitions is 
initialized from the option "drill.exec.hashagg.num_partitions", and scaled 
down if the available memory seems too small (each partition needs to hold at 
least one batch in memory).
  The scaling down uses the formula:  AVAIL_MEMORY > NUM_PARTITIONS * ( K 
* EST_BATCH_SIZE + 8M )
(see delayedSetup()), where K is the option 
drill.exec.hashagg.min_batches_per_partition (see below); a sketch of this 
rule appears at the end of this section.
  Computing the number of partitions is delayed until actual data arrives 
on the incoming side (in order to get accurate sizing for varchars); see 
delayedSetup(). There is also special code for cases where data never arrives 
(empty batches), hence no partitions (see the beginning of 
outputCurrentBatch(), cleanUp(), and delayedSetup()).
  Many of the code changes made to implement multiple partitions follow the 
original code, only changing scalar members (of HashAggTemplate) into arrays, 
e.g. "htable" becomes "htables[]".
  Each partition has its own hash table. Each time a partition is spilled, 
its hash table is freed and reallocated.
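
A hedged sketch of the scale-down rule above, with hypothetical names (the real logic lives in delayedSetup()):

```java
  // Halve the partition count until each partition can hold K batches plus
  // ~8MB of overhead:  AVAIL_MEMORY > NUM_PARTITIONS * (K * EST_BATCH_SIZE + 8M)
  static int scaleDownPartitions(int numPartitions, long availMemory,
                                 long estBatchSize, int minBatchesPerPartition) {
    final long perPartitionNeed = minBatchesPerPartition * estBatchSize + (8L << 20);
    while (numPartitions > 1 && availMemory <= numPartitions * perPartitionNeed) {
      numPartitions /= 2; // keep the count a power of 2
    }
    return numPartitions;
  }
```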

### Hash Code:
  The hash code computation result was extracted from the HashTable (needed 
for the partitions), and added as a parameter to the put() method. Thus for 
each new incoming row, first the hash code is computed, and the low N bits are 
used to select the partition, then the hash code is right shifted by N, and the 
result is passed back to the put() method.
  After spilling, the hash codes are not kept. When reading the rows 
(groups) from the spill, the hash codes are computed again (and right shifted 
before use - once per each cycle of spilling - thus repartitioning).
(See more about "spilling cycle" below).
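
In sketch form (hypothetical helper names), the split of the hash code described above:

```java
  // With numPartitions = 2^N: the low N bits select the partition ...
  static int selectPartition(int hashCode, int numPartitionBits) {
    return hashCode & ((1 << numPartitionBits) - 1);
  }

  // ... and the remaining bits, right-shifted by N, are what put() receives.
  // After a spill cycle the hash is recomputed and shifted again, which
  // effectively repartitions on the next N bits.
  static int tableHash(int hashCode, int numPartitionBits) {
    return hashCode >>> numPartitionBits;
  }
```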

### Hash Table put():
  The put() method for the hash table was rewritten and simplified. In 
addition to the hash-code parameter change, it now returns a PutStatus with 
two new states: NEW_BATCH_ADDED notifies the caller that a new batch was 
created internally, hence a matching new batch (only needed for the Hash Agg) 
must be allocated (the prior code deduced this by comparing the returned index 
against the prior number of batches).
  The second new state is KEY_ADDED_LAST, which notifies that a batch was 
just filled, hence it is time to check memory availability (because a new 
batch will be allocated soon); a sketch of the caller's side follows.
  Similar rewriting was done for the hash table containsKey() method (and 
addBatchifNeeded() ).
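
An illustrative sketch of how a caller might react to the two new states; the enum values come from this description, and the helper methods are hypothetical stubs:

```java
  enum PutStatus { KEY_PRESENT, KEY_ADDED, NEW_BATCH_ADDED, KEY_ADDED_LAST }

  void onPutStatus(PutStatus status, int partition) {
    switch (status) {
      case NEW_BATCH_ADDED:
        allocateValuesBatch(partition);         // hash table grew a keys batch internally
        break;
      case KEY_ADDED_LAST:
        checkMemoryAndSpillIfNeeded(partition); // a batch just filled; a new one is imminent
        break;
      default:
        break;                                  // nothing extra to do
    }
  }

  void allocateValuesBatch(int partition) { /* hypothetical stub */ }
  void checkMemoryAndSpillIfNeeded(int partition) { /* hypothetical stub: see below */ }
```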

### Memory pressure check:
  Logically, the place to check for memory pressure is when new memory is 
needed (i.e., when a new group needs to be created). However, the code 
structure does not allow this easily (e.g., a new batch is allocated inside 
the hash table object when a new group is detected, or the hash table 
structure is doubled in size); thus the check is instead done AFTER a new 
group is added, in case it was the last group added to that batch (see 
checkGroupAndAggrValues(), which checks for the new status KEY_ADDED_LAST).
   This memory availability check verifies that enough memory remains between 
what has been allocated so far and the limit.
 Spill is initiated when:  MEMORY_USED + MAX_MEMORY_NEEDED > MEMORY_LIMIT   
(see checkGroupAndAggrValues() )
 where the memory needed is:  (EST_BATCH_SIZE + 64K * (4+4)) * K * 
PLANNED_BATCHES + MAX_HASH_TABLES_RESIZE
(See K above, under Partitioning, and the rest 
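
A sketch of the spill trigger quoted above, with hypothetical parameter names standing in for the estimates the operator computes:

```java
  // Spill when:  MEMORY_USED + MAX_MEMORY_NEEDED > MEMORY_LIMIT, where the need is
  // (EST_BATCH_SIZE + 64K * (4+4)) * K * PLANNED_BATCHES + MAX_HASH_TABLES_RESIZE.
  static boolean shouldSpill(long memoryUsed, long memoryLimit, long estBatchSize,
                             int minBatchesPerPartition, int plannedBatches,
                             long maxHashTablesResize) {
    final long maxMemoryNeeded =
        (estBatchSize + 64L * 1024 * (4 + 4)) * minBatchesPerPartition * plannedBatches
            + maxHashTablesResize;
    return memoryUsed + maxMemoryNeeded > memoryLimit;
  }
```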

[jira] [Created] (DRILL-5457) Support Spill to Disk for the Hash Aggregate Operator

2017-05-01 Thread Boaz Ben-Zvi (JIRA)
Boaz Ben-Zvi created DRILL-5457:
-------------------------------

 Summary: Support Spill to Disk for the Hash Aggregate Operator
 Key: DRILL-5457
 URL: https://issues.apache.org/jira/browse/DRILL-5457
 Project: Apache Drill
  Issue Type: Improvement
  Components: Execution - Relational Operators
Affects Versions: 1.10.0
Reporter: Boaz Ben-Zvi
Assignee: Boaz Ben-Zvi
 Fix For: 1.11.0


Support gradually spilling memory to disk as the available memory becomes too 
small to allow in-memory work for the Hash Aggregate Operator.






[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114219484
  
--- Diff: protocol/src/main/protobuf/User.proto ---
@@ -124,6 +125,8 @@ message BitToUserHandshake {
   optional RpcEndpointInfos server_infos = 6;
   repeated string authenticationMechanisms = 7;
   repeated RpcType supported_methods = 8;
+  optional bool encrypted = 9;
+  optional int32 wrapChunkSize = 10;
--- End diff --

Change name to `maxWrappedSize`




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114214071
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to Decrypt the input ByteBuf. It expects input to be in format 
where it has length of the bytes to
+ * decode in network order and actual encrypted bytes. The handler reads 
the length and then reads the
+ * required bytes to pass it to unwrap function for decryption. The 
decrypted buffer is copied to a new
+ * ByteBuf and added to out list.
+ * 
+ * Example:
+ * Input - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with repeated 
combination of encrypted byte length
+ * in network order (EBLNx) and encrypted bytes (EB)
+ * Output - [DB1] --> Decrypted ByteBuf of first chunk.(EB1)
+ * 
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, 
OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.outOfMemoryHandler = oomHandler;
+this.maxEncodedSize = maxEncodedSize;
+
+// Allocate the byte array of maxEncodedSize to reuse for each encoded 
packet received on this connection
+// Maximum value of maxEncodedSize can be 16MB (i.e. 0xFFFFFF)
+encodedMsg = new byte[maxEncodedSize];
+lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " 
handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " 
handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+if (!ctx.channel().isOpen()) {
+  logger.trace("Channel closed before decoding the message of {} 
bytes", msg.readableBytes());
+  msg.skipBytes(msg.readableBytes());
+  return;
+}
+
+try {
+  if(logger.isTraceEnabled()) {
+logger.trace("Trying to decrypt the encrypted message of size: {} 
with maxEncodedSize", msg.readableBytes());
+  }
+
+  final byte[] wrappedMsg;
+
+  // All the encrypted blocks are prefixed with it's length in network 
byte order (or BigEndian format). Netty's
+  // default Byte order of ByteBuf is Little Endian, so we cannot just 
do msg.getInt() as that will read the 4
+  // octets in little endian format.
+  //
+  // We will read the length of one complete encrypted chunk and 
decode that.
+  msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, 
RpcConstants.LENGTH_FIELD_LENGTH);
+  int wrappedMsgLength = 
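
For reference, a self-contained sketch of the length read discussed in this excerpt (the quoted diff is truncated above); it assumes a 4-byte network-order prefix and is not the handler's exact code:

```java
import io.netty.buffer.ByteBuf;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class LengthPrefixSketch {
  static int readNetworkOrderLength(ByteBuf msg) {
    // Copy the 4 length octets without advancing the reader index, then
    // decode them explicitly as big-endian (network order).
    final ByteBuffer lengthOctets = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
    msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, 4);
    return lengthOctets.getInt(0);
  }
}
```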

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114212187
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/AuthenticationOutcomeListener.java
 ---
@@ -165,22 +172,23 @@ public void interrupted(InterruptedException e) {
  *
  * @param context challenge context
  * @return response
- * @throws Exception
+ * @throws Exception in case of any failure
  */
-SaslMessage process(SaslChallengeContext context) throws Exception;
+SaslMessage process(SaslChallengeContext context) throws Exception;
--- End diff --

Why not this signature?

` process(SaslChallengeContext context) 
throws Exception;`




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114211502
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/AuthenticationOutcomeListener.java
 ---
@@ -243,4 +249,46 @@ public SaslMessage process(SaslChallengeContext 
context) throws Exception {
   }
 }
   }
+
+  private static void handleSuccess(SaslChallengeContext context) throws 
SaslException {
+final ClientConnection connection = context.connection;
+final SaslClient saslClient = connection.getSaslClient();
+
+try {
+  // Check if connection was marked for being secure then verify for 
negotiated QOP value for
+  // correctness.
+  final String negotiatedQOP = 
saslClient.getNegotiatedProperty(Sasl.QOP).toString();
--- End diff --

Hmm, I read that in [Sasl.QOP 
doc](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/javax/security/sasl/Sasl.java#85):

> ... If this property is absent, the default qop is "auth"

So the change is required?
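
A null-safe version of the check implied by the doc quoted above (a sketch only; the method name is assumed):

```java
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

class QopCheckSketch {
  // If Sasl.QOP was never negotiated the property can be absent (the default
  // QOP is "auth"), so guard against null before comparing.
  static boolean encryptionNegotiated(SaslClient saslClient) {
    final Object qop = saslClient.getNegotiatedProperty(Sasl.QOP);
    return qop != null && "auth-conf".equals(qop.toString());
  }
}
```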




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114214085
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to Decrypt the input ByteBuf. It expects input to be in format 
where it has length of the bytes to
+ * decode in network order and actual encrypted bytes. The handler reads 
the length and then reads the
+ * required bytes to pass it to unwrap function for decryption. The 
decrypted buffer is copied to a new
+ * ByteBuf and added to out list.
+ * 
+ * Example:
+ * Input - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with repeated 
combination of encrypted byte length
+ * in network order (EBLNx) and encrypted bytes (EB)
+ * Output - [DB1] --> Decrypted ByteBuf of first chunk.(EB1)
+ * 
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, 
OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.outOfMemoryHandler = oomHandler;
+this.maxEncodedSize = maxEncodedSize;
+
+// Allocate the byte array of maxEncodedSize to reuse for each encoded 
packet received on this connection
+// Maximum value of maxEncodedSize can be 16MB (i.e. 0xFFFFFF)
+encodedMsg = new byte[maxEncodedSize];
+lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " 
handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " 
handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+if (!ctx.channel().isOpen()) {
+  logger.trace("Channel closed before decoding the message of {} 
bytes", msg.readableBytes());
+  msg.skipBytes(msg.readableBytes());
+  return;
+}
+
+try {
+  if(logger.isTraceEnabled()) {
+logger.trace("Trying to decrypt the encrypted message of size: {} 
with maxEncodedSize", msg.readableBytes());
+  }
+
+
+  // All the encrypted blocks are prefixed with it's length in network 
byte order (or BigEndian format). Netty's
+  // default Byte order of ByteBuf is Little Endian, so we cannot just 
do msg.getInt() as that will read the 4
+  // octets in little endian format.
+  //
+  // We will read the length of one complete encrypted chunk and 
decode that.
+  msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, 
RpcConstants.LENGTH_FIELD_LENGTH);
+  final int wrappedMsgLength = lengthOctets.getInt(0);
+  

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114214449
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and 
append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in 
the input ByteBuf then each component will be
+ * encrypted individually and added to output ByteBuf with it's length 
prepended.
+ * 
+ * Example:
+ * Input ByteBuf  --> [B1,B2] - 2 component ByteBuf of 16K byte each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBuf's 
with each ByteBuf containing
+ *Encrypted Byte Length (EBLNx) in network order as 
per SASL RFC and Encrypted Bytes (EBx).
+ * 
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't provide preference to allocator to use heap buffer instead 
of direct buffer.
+   * Drill uses it's own buffer allocator which doesn't support heap 
buffer allocation. We use
+   * Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, 
final OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.maxRawWrapSize = maxRawWrapSize;
+this.outOfMemoryHandler = oomHandler;
+
+// The maximum size of the component will be maxRawWrapSize. Since 
this is maximum size we can allocate once
+// and reuse it for each component encode.
+origMsgBuffer = new byte[this.maxRawWrapSize];
+lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+if (!ctx.channel().isOpen()) {
+  logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and 
channel is not open. " +
+  "So releasing msg memory before encryption.");
+  msg.release();
+  return;
+}
+
+try {
+  // If encryption is enabled then this handler will always get 
ByteBuf of type Composite ByteBuf
+  checkArgument(msg instanceof CompositeByteBuf);
+
+  final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+  int numComponents = cbb.numComponents();
--- End diff --

You could do this:

```java
final int numComponents = 

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114216967
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/RpcMetrics.java 
---
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import org.apache.drill.exec.memory.BufferAllocator;
+
+/**
+ * Holder interface for all the metrics used in RPC layer
+ */
+public interface RpcMetrics {
--- End diff --

-0 for this plumbing.

The end result is that only static state (counters) is being modified using 
instances, which is not recommended 
([ref](http://stackoverflow.com/questions/6832767/how-to-modify-private-static-variable-through-setter-method)).
 So the hierarchy is not required.
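
A sketch of the flatter alternative implied here (hypothetical names): keep the counters static and access them statically, rather than threading instances through the connection classes:

```java
import java.util.concurrent.atomic.AtomicLong;

final class UserRpcMetricsSketch {
  private static final AtomicLong ENCRYPTED_CONNECTIONS = new AtomicLong();

  private UserRpcMetricsSketch() { }

  static void incEncryptedConnections() {
    ENCRYPTED_CONNECTIONS.incrementAndGet();
  }

  static long encryptedConnections() {
    return ENCRYPTED_CONNECTIONS.get();
  }
}
```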




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114213700
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/ChunkCreationHandler.java ---
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.rpc;
+
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.lang.Math.min;
+
+/**
+ * Handler that converts an input ByteBuf into chunk size ByteBuf's and 
add it to the
+ * CompositeByteBuf as individual components. If encryption is enabled, 
this is always
+ * added in the channel pipeline.
+ */
+class ChunkCreationHandler extends MessageToMessageEncoder<ByteBuf> {
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  ChunkCreationHandler.class.getCanonicalName());
+
+  private final int chunkSize;
+
+  ChunkCreationHandler(int chunkSize) {
+checkArgument(chunkSize > 0);
+this.chunkSize = chunkSize;
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.CHUNK_CREATION_HANDLER + " 
handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.CHUNK_CREATION_HANDLER + " 
handler");
+  }
+
+  @Override
+  protected void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
+
+if (RpcConstants.EXTRA_DEBUGGING) {
+  logger.debug("ChunkCreationHandler called with msg {} of size {} 
with chunkSize {}",
+  msg, msg.readableBytes(), chunkSize);
+}
+
+if (!ctx.channel().isOpen()) {
+  logger.debug("Channel closed, skipping encode inside {}.", 
RpcConstants.CHUNK_CREATION_HANDLER);
+  msg.release();
+  return;
+}
+
+ByteBuf chunkBuf;
+
+// Calculate the number of chunks based on configured chunk size and 
input msg size
+int numChunks = (int) Math.ceil((double) msg.readableBytes() / 
chunkSize);
+
+// Initialize a composite buffer to hold numChunks chunk.
+final CompositeByteBuf cbb = ctx.alloc().compositeBuffer(numChunks);
+
+int cbbWriteIndex = 0;
+int currentChunkLen = min(msg.readableBytes(), chunkSize);
+
+// Create slices of chunkSize from input msg and add it to the 
composite buffer.
+while (numChunks > 0) {
+  chunkBuf = msg.slice(msg.readerIndex(), currentChunkLen);
--- End diff --

`final ByteBuf chunkBuf ...`
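
Applying that suggestion, a simplified sketch of the slicing loop (not a drop-in replacement for the handler):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

final class ChunkingSketch {
  static CompositeByteBuf chunk(ByteBufAllocator alloc, ByteBuf msg, int chunkSize) {
    final int numChunks = (msg.readableBytes() + chunkSize - 1) / chunkSize; // ceiling division
    final CompositeByteBuf cbb = alloc.compositeBuffer(numChunks);
    while (msg.readableBytes() > 0) {
      final int len = Math.min(msg.readableBytes(), chunkSize);
      final ByteBuf chunkBuf = msg.retainedSlice(msg.readerIndex(), len);
      cbb.addComponent(true, chunkBuf); // 'true' advances the composite's writer index
      msg.skipBytes(len);
    }
    return cbb;
  }
}
```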




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sudheeshkatkam
Github user sudheeshkatkam commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r114215418
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java ---
@@ -116,6 +116,11 @@
   String BIT_AUTHENTICATION_ENABLED = 
"drill.exec.security.bit.auth.enabled";
   String BIT_AUTHENTICATION_MECHANISM = 
"drill.exec.security.bit.auth.mechanism";
   String USE_LOGIN_PRINCIPAL = 
"drill.exec.security.bit.auth.use_login_principal";
+  String USER_ENCRYPTION_SASL_ENABLED = 
"drill.exec.security.user.encryption.sasl.enabled";
+  String USER_ENCRYPTION_SASL_MAX_WRAPPED_SIZE = 
"drill.exec.security.user.encryption.sasl.max_wrapped_size";
--- End diff --

We should document this config parameter due to the change in name (from 
"maximum size of the raw send buffer in bytes" to max_wrapped_size).

From [Sasl.RAW_SEND_SIZE 
doc](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/javax/security/sasl/Sasl.java#151):
> The name of a property that specifies the maximum size of the raw send 
buffer in bytes of SaslClient/SaslServer. The property contains the string 
representation of an integer. The value of this property is negotiated between 
the client and server during the authentication exchange. 




[HANGOUT] Topics for 5/2/17

2017-05-01 Thread Paul Rogers
Hi All,

Our bi-weekly hangout is tomorrow (5/2/17, 10 AM PT). Please respond with 
suggested topics. We will also ask for additional topics at the beginning of 
the hangout.

Thanks,

- Paul



[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113807185
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/EncryptionContext.java ---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+public interface EncryptionContext {
+
+  boolean isEncryptionEnabled();
+
+  void setEncryption(boolean encryptionEnabled);
+
+  void setMaxWrappedSize(int maxWrappedChunkSize);
--- End diff --

Same as for MaxRawSendSize. MaxReceiveBufferSize is a very generic name. It 
makes more sense to call it MaxWrappedSize since that's what it actually is 
from Drill's perspective.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113838109
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and 
append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in 
the input ByteBuf then each component will be
+ * encrypted individually and added to output ByteBuf with it's length 
prepended.
+ * 
+ * Example:
+ * Input ByteBuf  --> [B1,B2] - 2 component ByteBuf of 16K byte each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBuf's 
with each ByteBuf containing
+ *Encrypted Byte Length (EBLNx) in network order as 
per SASL RFC and Encrypted Bytes (EBx).
+ * 
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't provide preference to allocator to use heap buffer instead 
of direct buffer.
+   * Drill uses it's own buffer allocator which doesn't support heap 
buffer allocation. We use
+   * Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, 
final OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.maxRawWrapSize = maxRawWrapSize;
+this.outOfMemoryHandler = oomHandler;
+
+// The maximum size of the component will be maxRawWrapSize. Since 
this is maximum size we can allocate once
+// and reuse it for each component encode.
+origMsgBuffer = new byte[this.maxRawWrapSize];
+lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+if (!ctx.channel().isOpen()) {
+  logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and 
channel is not open. " +
+  "So releasing msg memory before encryption.");
+  msg.release();
+  return;
+}
+
+try {
+  // If encryption is enabled then this handler will always get 
ByteBuf of type Composite ByteBuf
+  checkArgument(msg instanceof CompositeByteBuf);
+
+  final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+  int numComponents = cbb.numComponents();
+  int currentIndex = 0;
+  byte[] origMsg;
+  ByteBuf encryptedBuf;
+  byte[] wrappedMsg;

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113375341
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/control/ControllerImpl.java
 ---
@@ -24,13 +24,15 @@
 import org.apache.drill.exec.exception.DrillbitStartupException;
 import org.apache.drill.exec.memory.BufferAllocator;
 import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.exec.rpc.RpcMetrics;
 import org.apache.drill.exec.server.BootStrapContext;
 import org.apache.drill.exec.work.batch.ControlMessageHandler;
 
 import com.google.common.collect.Lists;
 import com.google.protobuf.Message;
 import com.google.protobuf.MessageLite;
 import com.google.protobuf.Parser;
+import org.apache.hadoop.io.ReadaheadPool;
--- End diff --

Removed.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113837851
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and 
append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in 
the input ByteBuf then each component will be
+ * encrypted individually and added to output ByteBuf with it's length 
prepended.
+ * 
+ * Example:
+ * Input ByteBuf  --> [B1,B2] - 2 component ByteBuf of 16K byte each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBuf's 
with each ByteBuf containing
+ *Encrypted Byte Length (EBLNx) in network order as 
per SASL RFC and Encrypted Bytes (EBx).
+ * 
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't provide preference to allocator to use heap buffer instead 
of direct buffer.
+   * Drill uses it's own buffer allocator which doesn't support heap 
buffer allocation. We use
+   * Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, 
final OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.maxRawWrapSize = maxRawWrapSize;
+this.outOfMemoryHandler = oomHandler;
+
+// The maximum size of the component will be maxRawWrapSize. Since 
this is maximum size we can allocate once
+// and reuse it for each component encode.
+origMsgBuffer = new byte[this.maxRawWrapSize];
--- End diff --

See above.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113376924
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/ServerAuthenticationHandler.java
 ---
@@ -251,25 +255,67 @@ void process(SaslResponseContext<S, T> context) throws Exception {
   private static <S extends ServerConnection<S>, T extends EnumLite>
   void handleSuccess(final SaslResponseContext<S, T> context, final SaslMessage.Builder challenge,
  final SaslServer saslServer) throws IOException {
-context.connection.changeHandlerTo(context.requestHandler);
-context.connection.finalizeSaslSession();
-context.sender.send(new Response(context.saslResponseType, 
challenge.build()));
 
-// setup security layers here..
+final S connection = context.connection;
+connection.changeHandlerTo(context.requestHandler);
+connection.finalizeSaslSession();
+
+// Check the negotiated property before sending the response back to 
client
+try {
+  final String negotiatedQOP = 
saslServer.getNegotiatedProperty(Sasl.QOP).toString();
--- End diff --

Please see comment above.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113838037
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and 
append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in 
the input ByteBuf then each component will be
+ * encrypted individually and added to output ByteBuf with it's length 
prepended.
+ * 
+ * Example:
+ * Input ByteBuf  --> [B1,B2] - 2 component ByteBuf of 16K byte each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBuf's 
with each ByteBuf containing
+ *Encrypted Byte Length (EBLNx) in network order as 
per SASL RFC and Encrypted Bytes (EBx).
+ * 
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(
+  SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't provide preference to allocator to use heap buffer instead 
of direct buffer.
+   * Drill uses it's own buffer allocator which doesn't support heap 
buffer allocation. We use
+   * Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, 
final OutOfMemoryHandler oomHandler) {
+this.saslCodec = saslCodec;
+this.maxRawWrapSize = maxRawWrapSize;
+this.outOfMemoryHandler = oomHandler;
+
+// The maximum size of the component will be maxRawWrapSize. Since 
this is maximum size we can allocate once
+// and reuse it for each component encode.
+origMsgBuffer = new byte[this.maxRawWrapSize];
+lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+super.handlerAdded(ctx);
+logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+super.handlerRemoved(ctx);
+logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " 
handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+if (!ctx.channel().isOpen()) {
+  logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and 
channel is not open. " +
+  "So releasing msg memory before encryption.");
+  msg.release();
+  return;
+}
+
+try {
+  // If encryption is enabled then this handler will always get 
ByteBuf of type Composite ByteBuf
+  checkArgument(msg instanceof CompositeByteBuf);
+
+  final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+  int numComponents = cbb.numComponents();
+  int currentIndex = 0;
+  byte[] origMsg;
+  ByteBuf encryptedBuf;
+  byte[] wrappedMsg;

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113517138
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/ChunkCreationHandler.java ---
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.rpc;
+
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.lang.Math.min;
+
+/**
+ * Handler that converts an input ByteBuf into chunk-sized ByteBufs and adds them to a
+ * CompositeByteBuf as individual components. This is done irrespective of whether chunk mode
+ * is enabled or not.
+ */
+class ChunkCreationHandler extends MessageToMessageEncoder<ByteBuf> {
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      ChunkCreationHandler.class.getCanonicalName());
+
+  private final int chunkSize;
+
+  ChunkCreationHandler(int chunkSize) {
+    checkArgument(chunkSize > 0);
+    this.chunkSize = chunkSize;
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.CHUNK_CREATION_HANDLER + " handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.CHUNK_CREATION_HANDLER + " handler");
+  }
+
+  @Override
+  protected void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
+
+    if (RpcConstants.EXTRA_DEBUGGING) {
+      logger.debug("ChunkCreationHandler called with msg {} of size {} with chunkSize {}",
+          msg, msg.readableBytes(), chunkSize);
+    }
+
+    if (!ctx.channel().isOpen()) {
+      logger.debug("Channel closed, skipping encode inside {}.", RpcConstants.CHUNK_CREATION_HANDLER);
+      msg.release();
+      return;
+    }
+
+    ByteBuf chunkBuf;
+
+    // Calculate the number of chunks based on the configured chunk size and the input msg size
+    int numChunks = (int) Math.ceil((double) msg.readableBytes() / chunkSize);
+
+    // Initialize a composite buffer to hold numChunks chunks.
+    CompositeByteBuf cbb = new CompositeByteBuf(ctx.alloc(), true, numChunks);
--- End diff --

used this based on reference in RpcEncoder. Will change in both the places 
since this is the recommended way.
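
For reference, a minimal sketch of the allocator-based construction being referred to (this is not the PR's actual code; the helper name and chunking loop are illustrative, and only chunkSize and the composite-buffer idea come from the diff above):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

class ChunkingSketch {
  // Hypothetical helper: split msg into chunkSize-d slices held by a composite
  // buffer obtained from the channel's allocator (the recommended way) rather
  // than by invoking the CompositeByteBuf constructor directly.
  static CompositeByteBuf chunk(ByteBufAllocator alloc, ByteBuf msg, int chunkSize) {
    final int totalBytes = msg.readableBytes();
    final int numChunks = (int) Math.ceil((double) totalBytes / chunkSize);
    final CompositeByteBuf cbb = alloc.compositeBuffer(Math.max(1, numChunks));
    while (msg.isReadable()) {
      final int len = Math.min(chunkSize, msg.readableBytes());
      // Retain each slice so it stays valid after the original msg is released.
      cbb.addComponent(msg.readSlice(len).retain());
    }
    cbb.writerIndex(totalBytes); // make all component bytes readable from the composite
    return cbb;
  }
}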




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113835682
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/RemoteConnection.java ---
@@ -51,6 +51,12 @@
 
   SocketAddress getRemoteAddress();
 
+  void addSecurityHandlers();
+
+  void incConnectionCounter();
--- End diff --

Removed the methods from the interface.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113837790
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to contain the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then
+ * reads the required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied
+ * to a new ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input  - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with a repeated combination of encrypted byte length
+ *          in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // The maximum value of maxEncodedSize can be 16MB (i.e. 0XFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.trace("Channel closed before decoding the message of {} bytes", msg.readableBytes());
+      msg.skipBytes(msg.readableBytes());
+      return;
+    }
+
+    try {
+      if (logger.isTraceEnabled()) {
+        logger.trace("Trying to decrypt the encrypted message of size: {} with maxEncodedSize", msg.readableBytes());
+      }
+
+      final byte[] wrappedMsg;
+
+      // All the encrypted blocks are prefixed with their length in network byte order (i.e. BigEndian format).
+      // Netty's default byte order for ByteBuf is Little Endian, so we cannot just do msg.getInt() as that
+      // would read the 4 octets in little endian format.
+      //
+      // We will read the length of one complete encrypted chunk and decode that.
+      msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, RpcConstants.LENGTH_FIELD_LENGTH);
+      int wrappedMsgLength = lengthOctets.getInt(0);

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113834772
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/EncryptionContext.java ---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+public interface EncryptionContext {
--- End diff --

As discussed, keeping as is. The main purpose of changing to a separate EncryptionContext interface was to keep the RemoteConnection interface as is, which was the concern in the previous comment. Having getEncryptionOptions return the instance of EncryptionContextImpl gives the caller control to set and get values on that instance. Since the instance is private to a connection, only the connection should be allowed to change its state, and the caller will only have exposure to the functions as needed (see the sketch below).
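
A rough sketch of the arrangement described above, with hypothetical field names (only the accessor names quoted elsewhere in this thread are taken from the PR; getWrapSizeLimit is assumed to exist alongside the quoted setter):

// Sketch: a connection privately owns a mutable EncryptionContextImpl and hands it
// out through the EncryptionContext interface, so RemoteConnection stays unchanged
// while callers can still get and set the negotiated encryption state.
class EncryptionContextImpl implements EncryptionContext {
  private boolean encryptionEnabled;
  private int maxWrappedSize;
  private int wrapSizeLimit;

  @Override public boolean isEncryptionEnabled() { return encryptionEnabled; }
  @Override public void setEncryption(boolean encryptionEnabled) { this.encryptionEnabled = encryptionEnabled; }
  @Override public void setMaxWrappedSize(int maxWrappedChunkSize) { this.maxWrappedSize = maxWrappedChunkSize; }
  @Override public int getMaxWrappedSize() { return maxWrappedSize; }
  @Override public void setWrapSizeLimit(int wrapSizeLimit) { this.wrapSizeLimit = wrapSizeLimit; }
  @Override public int getWrapSizeLimit() { return wrapSizeLimit; } // assumed getter
}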




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113835839
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/RpcConstants.java ---
@@ -24,4 +24,29 @@ private RpcConstants(){}
 
   public static final boolean SOME_DEBUGGING = false;
   public static final boolean EXTRA_DEBUGGING = false;
+
+  // RPC Handler names
+  public static final String TIMEOUT_HANDLER = "timeout-handler";
+  public static final String PROTOCOL_DECODER = "protocol-decoder";
+  public static final String PROTOCOL_ENCODER = "protocol-encoder";
+  public static final String MESSAGE_DECODER = "message-decoder";
+  public static final String HANDSHAKE_HANDLER = "handshake-handler";
+  public static final String MESSAGE_HANDLER = "message-handler";
+  public static final String EXCEPTION_HANDLER = "exception-handler";
+  public static final String IDLE_STATE_HANDLER = "idle-state-handler";
+  public static final String SASL_DECRYPTION_HANDLER = "sasldecryption-handler";
+  public static final String SASL_ENCRYPTION_HANDLER = "saslencryption-handler";
+  public static final String LENGTH_DECODER_HANDLER = "length-decoder";
+  public static final String CHUNK_CREATION_HANDLER = "chunkcreation-handler";
+
+  // SASL RFC 4422 allows only 3 octets to specify the length of the maximum encoded buffer each side can receive.
+  // Hence the maximum buffer size is capped at 16MB, i.e. 0XFFFFFF bytes.
+  public static final int MAX_WRAPPED_SIZE = 0XFFFFFF;
+
+  public static final int LENGTH_FIELD_OFFSET = 0;
+  public static final int LENGTH_FIELD_LENGTH = 4;
--- End diff --

Borrowed the terminology from `LengthFieldBasedFrameDecoder`, which uses it to indicate the length of the length field (see the sketch below).
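
For context, a sketch of how those constants line up with Netty's LengthFieldBasedFrameDecoder (the pipeline wiring is illustrative; only the constant names come from the diff above):

import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

class LengthDecoderSetup {
  // The length field starts at offset 0 and is 4 bytes long, and no frame may
  // exceed MAX_WRAPPED_SIZE, matching the vocabulary of the constants above.
  static void addLengthDecoder(ChannelPipeline pipeline) {
    pipeline.addLast(RpcConstants.LENGTH_DECODER_HANDLER,
        new LengthFieldBasedFrameDecoder(
            RpcConstants.MAX_WRAPPED_SIZE,       // maxFrameLength
            RpcConstants.LENGTH_FIELD_OFFSET,    // lengthFieldOffset
            RpcConstants.LENGTH_FIELD_LENGTH));  // lengthFieldLength
  }
}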




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113836168
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to contain the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then
+ * reads the required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied
+ * to a new ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input  - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with a repeated combination of encrypted byte length
+ *          in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // The maximum value of maxEncodedSize can be 16MB (i.e. 0XFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.trace("Channel closed before decoding the message of {} bytes", msg.readableBytes());
+      msg.skipBytes(msg.readableBytes());
+      return;
+    }
+
+    try {
+      if (logger.isTraceEnabled()) {
+        logger.trace("Trying to decrypt the encrypted message of size: {} with maxEncodedSize", msg.readableBytes());
+      }
+
+      final byte[] wrappedMsg;
+
+      // All the encrypted blocks are prefixed with their length in network byte order (i.e. BigEndian format).
+      // Netty's default byte order for ByteBuf is Little Endian, so we cannot just do msg.getInt() as that
+      // would read the 4 octets in little endian format.
+      //
+      // We will read the length of one complete encrypted chunk and decode that.
+      msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, RpcConstants.LENGTH_FIELD_LENGTH);
+      int wrappedMsgLength = lengthOctets.getInt(0);

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113804812
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/AuthenticationOutcomeListener.java ---
@@ -243,4 +249,46 @@ public SaslMessage process(SaslChallengeContext context) throws Exception {
       }
     }
   }
+
+  private static void handleSuccess(SaslChallengeContext context) throws SaslException {
+    final ClientConnection connection = context.connection;
+    final SaslClient saslClient = connection.getSaslClient();
+
+    try {
+      // If the connection was marked as secure, verify the negotiated QOP value for correctness.
+      final String negotiatedQOP = saslClient.getNegotiatedProperty(Sasl.QOP).toString();
+      final String expectedQOP = connection.isEncryptionEnabled()
+          ? SaslProperties.QualityOfProtection.PRIVACY.getSaslQop()
+          : SaslProperties.QualityOfProtection.AUTHENTICATION.getSaslQop();
+
+      if (!(negotiatedQOP.equals(expectedQOP))) {
+        throw new SaslException(String.format("Mismatch in negotiated QOP value: %s and Expected QOP value: %s",
+            negotiatedQOP, expectedQOP));
+      }
+
+      // Update the rawWrapChunkSize with the negotiated buffer size since we cannot call encode with more than
+      // the negotiated size of buffer.
+      if (connection.isEncryptionEnabled()) {
+        final int negotiatedRawSendSize = Integer.parseInt(
+            saslClient.getNegotiatedProperty(Sasl.RAW_SEND_SIZE).toString());
+        if (negotiatedRawSendSize <= 0) {
+          throw new SaslException(String.format("Negotiated rawSendSize: %d is invalid. Please check the configured " +
+              "value of sasl.encryption.encodesize. It might be configured to a very small value.",
+              negotiatedRawSendSize));
+        }
+        connection.setWrapSizeLimit(negotiatedRawSendSize);
--- End diff --

Changed the name in SaslEncryptionHandler to wrapSizeLimit as well. I don't want to keep the name maxSendBufferSize since that is a very generic name, and it is actually the rawSendSize, from SASL's perspective, for the wrap function. From Drill's perspective I think wrapSizeLimit makes more sense (see the sketch below).
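
For readers following along, a hedged sketch of pulling the negotiated raw send size off a completed SaslClient (the helper is illustrative; Sasl.RAW_SEND_SIZE and getNegotiatedProperty are standard javax.security.sasl API):

import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

final class WrapSizeLimitSketch {
  // After a successful negotiation, RAW_SEND_SIZE is the largest buffer that may
  // be passed to wrap(); in this PR's terms it becomes the connection's wrapSizeLimit.
  static int wrapSizeLimit(SaslClient saslClient) throws SaslException {
    final int negotiatedRawSendSize =
        Integer.parseInt(saslClient.getNegotiatedProperty(Sasl.RAW_SEND_SIZE).toString());
    if (negotiatedRawSendSize <= 0) {
      throw new SaslException("Invalid negotiated rawSendSize: " + negotiatedRawSendSize);
    }
    return negotiatedRawSendSize;
  }
}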




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113835699
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/RpcConstants.java ---
@@ -24,4 +24,29 @@ private RpcConstants(){}
 
   public static final boolean SOME_DEBUGGING = false;
   public static final boolean EXTRA_DEBUGGING = false;
+
+  // RPC Handler names
+  public static final String TIMEOUT_HANDLER = "timeout-handler";
+  public static final String PROTOCOL_DECODER = "protocol-decoder";
+  public static final String PROTOCOL_ENCODER = "protocol-encoder";
+  public static final String MESSAGE_DECODER = "message-decoder";
+  public static final String HANDSHAKE_HANDLER = "handshake-handler";
+  public static final String MESSAGE_HANDLER = "message-handler";
+  public static final String EXCEPTION_HANDLER = "exception-handler";
+  public static final String IDLE_STATE_HANDLER = "idle-state-handler";
+  public static final String SASL_DECRYPTION_HANDLER = "sasldecryption-handler";
--- End diff --

Fixed




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113836743
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to contain the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then
+ * reads the required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied
+ * to a new ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input  - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with a repeated combination of encrypted byte length
+ *          in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // The maximum value of maxEncodedSize can be 16MB (i.e. 0XFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.trace("Channel closed before decoding the message of {} bytes", msg.readableBytes());
+      msg.skipBytes(msg.readableBytes());
+      return;
+    }
+
+    try {
+      if (logger.isTraceEnabled()) {
+        logger.trace("Trying to decrypt the encrypted message of size: {} with maxEncodedSize", msg.readableBytes());
+      }
+
+      final byte[] wrappedMsg;
+
+      // All the encrypted blocks are prefixed with their length in network byte order (i.e. BigEndian format).
+      // Netty's default byte order for ByteBuf is Little Endian, so we cannot just do msg.getInt() as that
+      // would read the 4 octets in little endian format.
+      //
+      // We will read the length of one complete encrypted chunk and decode that.
+      msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, RpcConstants.LENGTH_FIELD_LENGTH);
+      int wrappedMsgLength = lengthOctets.getInt(0);

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113807219
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/EncryptionContext.java ---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+public interface EncryptionContext {
+
+  boolean isEncryptionEnabled();
+
+  void setEncryption(boolean encryptionEnabled);
+
+  void setMaxWrappedSize(int maxWrappedChunkSize);
+
+  int getMaxWrappedSize();
+
+  void setWrapSizeLimit(int wrapSizeLimit);
--- End diff --

Same as before.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113378898
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/ChunkCreationHandler.java ---
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.rpc;
+
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.lang.Math.min;
+
+/**
+ * Handler that converts an input ByteBuf into chunk size ByteBuf's and add it to the
+ * CompositeByteBuf as individual components. This is done irrespective of chunk mode is
--- End diff --

Fixed.




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113837839
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to contain the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then
+ * reads the required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied
+ * to a new ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input  - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with a repeated combination of encrypted byte length
+ *          in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // The maximum value of maxEncodedSize can be 16MB (i.e. 0XFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.trace("Channel closed before decoding the message of {} bytes", msg.readableBytes());
+      msg.skipBytes(msg.readableBytes());
+      return;
+    }
+
+    try {
+      if (logger.isTraceEnabled()) {
+        logger.trace("Trying to decrypt the encrypted message of size: {} with maxEncodedSize", msg.readableBytes());
+      }
+
+      final byte[] wrappedMsg;
+
+      // All the encrypted blocks are prefixed with their length in network byte order (i.e. BigEndian format).
+      // Netty's default byte order for ByteBuf is Little Endian, so we cannot just do msg.getInt() as that
+      // would read the 4 octets in little endian format.
+      //
+      // We will read the length of one complete encrypted chunk and decode that.
+      msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, RpcConstants.LENGTH_FIELD_LENGTH);
+      int wrappedMsgLength = lengthOctets.getInt(0);

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113378601
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/ServerAuthenticationHandler.java ---
@@ -251,25 +255,67 @@ void process(SaslResponseContext<S, T> context) throws Exception {
   private static <S extends ServerConnection<S>, T extends EnumLite>
   void handleSuccess(final SaslResponseContext<S, T> context, final SaslMessage.Builder challenge,
                      final SaslServer saslServer) throws IOException {
-    context.connection.changeHandlerTo(context.requestHandler);
-    context.connection.finalizeSaslSession();
-    context.sender.send(new Response(context.saslResponseType, challenge.build()));
 
-    // setup security layers here..
+    final S connection = context.connection;
+    connection.changeHandlerTo(context.requestHandler);
+    connection.finalizeSaslSession();
+
+    // Check the negotiated property before sending the response back to the client
+    try {
+      final String negotiatedQOP = saslServer.getNegotiatedProperty(Sasl.QOP).toString();
+      final String expectedQOP = (connection.isEncryptionEnabled())
+          ? SaslProperties.QualityOfProtection.PRIVACY.getSaslQop()
+          : SaslProperties.QualityOfProtection.AUTHENTICATION.getSaslQop();
+
+      if (!(negotiatedQOP.equals(expectedQOP))) {
+        throw new SaslException(String.format("Mismatch in negotiated QOP value: %s and Expected QOP value: %s",
+            negotiatedQOP, expectedQOP));
+      }
+
+      // Update the rawWrapSendSize with the negotiated rawSendSize since we cannot call encode with more than
+      // the negotiated size of buffer
+      if (connection.isEncryptionEnabled()) {
+        final int negotiatedRawSendSize = Integer.parseInt(
+            saslServer.getNegotiatedProperty(Sasl.RAW_SEND_SIZE).toString());
+        if (negotiatedRawSendSize <= 0) {
+          throw new SaslException(String.format("Negotiated rawSendSize: %d is invalid. Please check the configured " +
+              "value of encryption.sasl.max_wrapped_size. It might be configured to a very small value.",
+              negotiatedRawSendSize));
+        }
+        connection.setWrapSizeLimit(negotiatedRawSendSize);
--- End diff --

Good catch! The previous logic of having a new EncryptionContext object with each connection was taking care of it; I didn't realize this while making the change. Will have a separate EncryptionContext object for each connection and initialize it with the passed object in the constructor (sketched below).
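
A sketch of the fix being described, reusing the hypothetical EncryptionContextImpl fields sketched earlier in this thread: each connection constructs its own context object seeded from the shared one, so state set during one connection's negotiation no longer leaks into others.

// Hypothetical copy constructor: per-connection encryption state initialized from
// the shared/default context instead of aliasing one object across connections.
EncryptionContextImpl(EncryptionContext context) {
  this.encryptionEnabled = context.isEncryptionEnabled();
  this.maxWrappedSize = context.getMaxWrappedSize();
  this.wrapSizeLimit = context.getWrapSizeLimit(); // getter assumed to exist
}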




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113808458
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to contain the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then
+ * reads the required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied
+ * to a new ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input  - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with a repeated combination of encrypted byte length
+ *          in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // The maximum value of maxEncodedSize can be 16MB (i.e. 0XFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_DECRYPTION_HANDLER + " handler");
+  }
+
+  public void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.trace("Channel closed before decoding the message of {} bytes", msg.readableBytes());
+      msg.skipBytes(msg.readableBytes());
+      return;
+    }
+
+    try {
+      if (logger.isTraceEnabled()) {
+        logger.trace("Trying to decrypt the encrypted message of size: {} with maxEncodedSize", msg.readableBytes());
+      }
+
+      final byte[] wrappedMsg;
+
+      // All the encrypted blocks are prefixed with their length in network byte order (i.e. BigEndian format).
+      // Netty's default byte order for ByteBuf is Little Endian, so we cannot just do msg.getInt() as that
+      // would read the 4 octets in little endian format.
+      //
+      // We will read the length of one complete encrypted chunk and decode that.
+      msg.getBytes(msg.readerIndex(), lengthOctets.array(), 0, RpcConstants.LENGTH_FIELD_LENGTH);
+      int wrappedMsgLength = lengthOctets.getInt(0);

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113838105
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in the input ByteBuf then each component
+ * will be encrypted individually and added to the output ByteBuf with its length prepended.
+ *
+ * Example:
+ * Input ByteBuf  --> [B1, B2] - 2 component ByteBuf of 16K bytes each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBufs with each ByteBuf containing
+ *                    Encrypted Byte Length (EBLNx) in network order as per the SASL RFC and Encrypted Bytes (EBx).
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't give the allocator a preference to use a heap buffer instead of a direct buffer.
+   * Drill uses its own buffer allocator, which doesn't support heap buffer allocation. We use
+   * the Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, final OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.maxRawWrapSize = maxRawWrapSize;
+    this.outOfMemoryHandler = oomHandler;
+
+    // The maximum size of a component will be maxRawWrapSize. Since this is the maximum size, we can
+    // allocate once and reuse the buffer for each component encode.
+    origMsgBuffer = new byte[this.maxRawWrapSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " +
+          "So releasing msg memory before encryption.");
+      msg.release();
+      return;
+    }
+
+    try {
+      // If encryption is enabled then this handler will always get a ByteBuf of type CompositeByteBuf
+      checkArgument(msg instanceof CompositeByteBuf);
+
+      final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+      int numComponents = cbb.numComponents();
+      int currentIndex = 0;
+      byte[] origMsg;
+      ByteBuf encryptedBuf;
+      byte[] wrappedMsg;

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113522675
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in the input ByteBuf then each component
+ * will be encrypted individually and added to the output ByteBuf with its length prepended.
+ *
+ * Example:
+ * Input ByteBuf  --> [B1, B2] - 2 component ByteBuf of 16K bytes each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBufs with each ByteBuf containing
+ *                    Encrypted Byte Length (EBLNx) in network order as per the SASL RFC and Encrypted Bytes (EBx).
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't give the allocator a preference to use a heap buffer instead of a direct buffer.
+   * Drill uses its own buffer allocator, which doesn't support heap buffer allocation. We use
+   * the Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, final OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.maxRawWrapSize = maxRawWrapSize;
+    this.outOfMemoryHandler = oomHandler;
+
+    // The maximum size of a component will be maxRawWrapSize. Since this is the maximum size, we can
+    // allocate once and reuse the buffer for each component encode.
+    origMsgBuffer = new byte[this.maxRawWrapSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " +
+          "So releasing msg memory before encryption.");
+      msg.release();
+      return;
+    }
+
+    try {
+      // If encryption is enabled then this handler will always get a ByteBuf of type CompositeByteBuf
+      checkArgument(msg instanceof CompositeByteBuf);
+
+      final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+      int numComponents = cbb.numComponents();
+      int currentIndex = 0;
+      byte[] origMsg;
+      ByteBuf encryptedBuf;
+      byte[] wrappedMsg;

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113376608
  
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/AuthenticationOutcomeListener.java ---
@@ -243,4 +249,46 @@ public SaslMessage process(SaslChallengeContext context) throws Exception {
       }
     }
   }
+
+  private static void handleSuccess(SaslChallengeContext context) throws SaslException {
+    final ClientConnection connection = context.connection;
+    final SaslClient saslClient = connection.getSaslClient();
+
+    try {
+      // If the connection was marked as secure, verify the negotiated QOP value for correctness.
+      final String negotiatedQOP = saslClient.getNegotiatedProperty(Sasl.QOP).toString();
--- End diff --

A null QOP does not mean auth. Instead, when we pass QOP as null, the mechanism will use the default QOP value for negotiation, which is auth. So getNegotiatedProperty will always return a valid object (see the sketch below).
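
To illustrate, a small sketch of how the QOP property flows into negotiation (standard javax.security.sasl API; the mechanism, protocol, and server name are placeholders, and a real client would also supply a CallbackHandler):

import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

final class QopSketch {
  static SaslClient createClient(boolean encryptionEnabled) throws SaslException {
    final Map<String, Object> props = new HashMap<>();
    if (encryptionEnabled) {
      props.put(Sasl.QOP, "auth-conf"); // privacy: authentication plus encryption
    }
    // When Sasl.QOP is not set (i.e. "passed as null"), the mechanism negotiates
    // its default QOP, which is "auth", so getNegotiatedProperty(Sasl.QOP) still
    // returns a valid object once negotiation completes.
    return Sasl.createSaslClient(new String[] {"GSSAPI"}, null, "drill",
        "example-host", props, null);
  }
}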




[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113808826
  
--- Diff: exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in the input ByteBuf then each component
+ * will be encrypted individually and added to the output ByteBuf with its length prepended.
+ *
+ * Example:
+ * Input ByteBuf  --> [B1, B2] - 2 component ByteBuf of 16K bytes each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBufs with each ByteBuf containing
+ *                    Encrypted Byte Length (EBLNx) in network order as per the SASL RFC and Encrypted Bytes (EBx).
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't give the allocator a preference to use a heap buffer instead of a direct buffer.
+   * Drill uses its own buffer allocator, which doesn't support heap buffer allocation. We use
+   * the Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, final OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.maxRawWrapSize = maxRawWrapSize;
+    this.outOfMemoryHandler = oomHandler;
+
+    // The maximum size of a component will be maxRawWrapSize. Since this is the maximum size, we can
+    // allocate once and reuse the buffer for each component encode.
+    origMsgBuffer = new byte[this.maxRawWrapSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " +
+          "So releasing msg memory before encryption.");
+      msg.release();
+      return;
+    }
+
+    try {
+      // If encryption is enabled then this handler will always get a ByteBuf of type CompositeByteBuf
+      checkArgument(msg instanceof CompositeByteBuf);
+
+      final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+      int numComponents = cbb.numComponents();
--- End diff --

Then in the for loop we will always call `cbb.numComponents()` to check 
whether the index is within limits.
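
For readers following along: the framing described in the Javadoc above is just a 
length in network order followed by the wrapped bytes. A minimal standalone sketch 
of that framing, assuming a 4-byte length field (which RpcConstants.LENGTH_FIELD_LENGTH 
appears to be) and a hypothetical `wrap` stand-in that only copies bytes, where the 
real handler would call the SASL codec's wrap:

{noformat}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class SaslFramingSketch {

  // Hypothetical stand-in for the SASL wrap call; a real security layer
  // would encrypt and/or integrity-protect the bytes here.
  static byte[] wrap(byte[] raw, int offset, int len) {
    byte[] out = new byte[len];
    System.arraycopy(raw, offset, out, 0, len);
    return out;
  }

  // Frame one component as [4-byte length in big-endian/network order][wrapped bytes],
  // i.e. the [EBLNx, EBx] layout from the Javadoc above.
  static byte[] frame(byte[] component) {
    byte[] wrapped = wrap(component, 0, component.length);
    ByteBuffer framed = ByteBuffer.allocate(4 + wrapped.length).order(ByteOrder.BIG_ENDIAN);
    framed.putInt(wrapped.length); // EBLNx
    framed.put(wrapped);           // EBx
    return framed.array();
  }

  public static void main(String[] args) {
    System.out.println(frame("hello".getBytes()).length); // prints 9 (4 + 5)
  }
}
{noformat}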

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113833301
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslDecryptionHandler.java ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageDecoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+/**
+ * Handler to decrypt the input ByteBuf. It expects the input to be in a format that has the length of the bytes to
+ * decode in network order followed by the actual encrypted bytes. The handler reads the length and then reads the
+ * required bytes to pass to the unwrap function for decryption. The decrypted buffer is copied to a new
+ * ByteBuf and added to the out list.
+ *
+ * Example:
+ * Input - [EBLN1, EB1, EBLN2, EB2] --> ByteBuf with repeated combinations of encrypted byte length
+ *         in network order (EBLNx) and encrypted bytes (EBx)
+ * Output - [DB1] --> Decrypted ByteBuf of the first chunk (EB1).
+ *
+ */
+class SaslDecryptionHandler extends MessageToMessageDecoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslDecryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxEncodedSize;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  private final byte[] encodedMsg;
+
+  private final ByteBuffer lengthOctets;
+
+  SaslDecryptionHandler(SaslCodec saslCodec, int maxEncodedSize, OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.outOfMemoryHandler = oomHandler;
+    this.maxEncodedSize = maxEncodedSize;
+
+    // Allocate a byte array of maxEncodedSize to reuse for each encoded packet received on this connection.
+    // Maximum value of maxEncodedSize can be 16MB (i.e. 0xFFFFFF)
+    encodedMsg = new byte[maxEncodedSize];
--- End diff --

Thanks for pointing this out. The intent of the pre-allocated buffer was to 
reuse the same heap buffer to copy the encoded bytes from the DrillBuf to the 
heap, rather than allocating a new heap buffer every time, which can lead to 
GC thrashing. But since 16MB is too big a buffer per connection, I did some 
perf tests of the wrap function, and it performs better with smaller chunks. So 
I will default the buffer size to 64K for now. It's a configurable parameter 
which can be changed as needed.
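
For reference, a rough sketch of the reuse pattern described above (illustrative 
only, not the PR's code; the 64K default constant and the method name here are 
assumptions):

{noformat}
import io.netty.buffer.ByteBuf;

class EncodedCopySketch {
  // One heap buffer per connection, sized to the configured maximum (64K here),
  // allocated once and reused for every encoded packet, instead of allocating a
  // fresh byte[] per packet, which would churn the garbage collector.
  private static final int DEFAULT_ENCODED_BUF_SIZE = 64 * 1024;
  private final byte[] encodedMsg = new byte[DEFAULT_ENCODED_BUF_SIZE];

  // Copy 'length' encoded bytes out of the (direct, off-heap) Netty buffer into
  // the reusable heap array, since SASL unwrap operates on heap byte arrays.
  byte[] copyEncodedBytes(ByteBuf msg, int length) {
    msg.getBytes(msg.readerIndex(), encodedMsg, 0, length);
    return encodedMsg;
  }
}
{noformat}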


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113838045
  
--- Diff: 
exec/rpc/src/main/java/org/apache/drill/exec/rpc/SaslEncryptionHandler.java ---
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.rpc;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToMessageEncoder;
+
+import org.apache.drill.exec.exception.OutOfMemoryException;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.List;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+
+/**
+ * Handler to wrap the input Composite ByteBuf components separately and append the encrypted length for each
+ * component in the output ByteBuf. If there are multiple components in the input ByteBuf then each component will be
+ * encrypted individually and added to the output ByteBuf with its length prepended.
+ *
+ * Example:
+ * Input ByteBuf  --> [B1,B2] - 2 component ByteBuf of 16K bytes each.
+ * Output ByteBuf --> [[EBLN1, EB1], [EBLN2, EB2]] - List of ByteBufs with each ByteBuf containing
+ *                    Encrypted Byte Length (EBLNx) in network order as per SASL RFC and Encrypted Bytes (EBx).
+ *
+ */
+class SaslEncryptionHandler extends MessageToMessageEncoder<ByteBuf> {
+
+  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(
+      SaslEncryptionHandler.class.getCanonicalName());
+
+  private final SaslCodec saslCodec;
+
+  private final int maxRawWrapSize;
+
+  private byte[] origMsgBuffer;
+
+  private final ByteBuffer lengthOctets;
+
+  private final OutOfMemoryHandler outOfMemoryHandler;
+
+  /**
+   * We don't give the allocator a preference for heap buffers over direct buffers.
+   * Drill uses its own buffer allocator, which doesn't support heap buffer allocation; we use
+   * the Drill buffer allocator in the channel.
+   */
+  SaslEncryptionHandler(SaslCodec saslCodec, final int maxRawWrapSize, final OutOfMemoryHandler oomHandler) {
+    this.saslCodec = saslCodec;
+    this.maxRawWrapSize = maxRawWrapSize;
+    this.outOfMemoryHandler = oomHandler;
+
+    // The maximum size of a component will be maxRawWrapSize. Since this is the maximum size, we can allocate once
+    // and reuse the buffer for each component encode.
+    origMsgBuffer = new byte[this.maxRawWrapSize];
+    lengthOctets = ByteBuffer.allocate(RpcConstants.LENGTH_FIELD_LENGTH);
+    lengthOctets.order(ByteOrder.BIG_ENDIAN);
+  }
+
+  @Override
+  public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+    super.handlerAdded(ctx);
+    logger.trace("Added " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler!");
+  }
+
+  @Override
+  public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+    super.handlerRemoved(ctx);
+    logger.trace("Removed " + RpcConstants.SASL_ENCRYPTION_HANDLER + " handler");
+  }
+
+  public void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws IOException {
+
+    if (!ctx.channel().isOpen()) {
+      logger.debug("In " + RpcConstants.SASL_ENCRYPTION_HANDLER + " and channel is not open. " +
+          "So releasing msg memory before encryption.");
+      msg.release();
+      return;
+    }
+
+    try {
+      // If encryption is enabled then this handler will always get a ByteBuf of type CompositeByteBuf
+      checkArgument(msg instanceof CompositeByteBuf);
+
+      final CompositeByteBuf cbb = (CompositeByteBuf) msg;
+      int numComponents = cbb.numComponents();
+      int currentIndex = 0;
+      byte[] origMsg;
+      ByteBuf encryptedBuf;
+      byte[] wrappedMsg;

[GitHub] drill pull request #773: DRILL-4335: Apache Drill should support network enc...

2017-05-01 Thread sohami
Github user sohami commented on a diff in the pull request:

https://github.com/apache/drill/pull/773#discussion_r113833928
  
--- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/rpc/security/AuthenticationOutcomeListener.java
 ---
@@ -143,16 +149,16 @@ public void interrupted(InterruptedException e) {
 completionListener.interrupted(e);
   }
 
-  private static class SaslChallengeContext {
+  private static class SaslChallengeContext {
--- End diff --

Fixed.




[jira] [Created] (DRILL-5456) StringIndexOutOfBoundsException when converting a JSON array to UTF-8

2017-05-01 Thread Khurram Faraaz (JIRA)
Khurram Faraaz created DRILL-5456:
-

 Summary: StringIndexOutOfBoundsException when converting a JSON 
array to UTF-8
 Key: DRILL-5456
 URL: https://issues.apache.org/jira/browse/DRILL-5456
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Affects Versions: 1.11.0
Reporter: Khurram Faraaz


Converting a JSON array to UTF-8 using the CONVERT_TO function results in a 
StringIndexOutOfBoundsException.
Apache Drill 1.11.0 commit ID: 3e8b01d

Data used in test
{noformat}
[root@centos-01 ~]# cat rptd_count.json
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":1}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":2}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":3}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":4}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":5}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":6}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":7}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":8}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":9}
{"arr":[0,1,2,3,4,5,6,7,8,9,10],"id":10}
[root@centos-01 ~]#
{noformat}

{noformat}
0: jdbc:drill:schema=dfs.tmp> select convert_to(t.arr,'UTF-8') c, id from 
`rptd_count.json` t;
Error: SYSTEM ERROR: StringIndexOutOfBoundsException: String index out of 
range: -3

[Error Id: 056a13c0-6c9f-403e-877e-040e907d6581 on centos-01.qa.lab:31010] 
(state=,code=0)
0: jdbc:drill:schema=dfs.tmp>
{noformat}

Data from the JSON file
{noformat}
0: jdbc:drill:schema=dfs.tmp> select * from `rptd_count.json`;
+---+-+
|arr| id  |
+---+-+
| [0,1,2,3,4,5,6,7,8,9,10]  | 1   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 2   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 3   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 4   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 5   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 6   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 7   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 8   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 9   |
| [0,1,2,3,4,5,6,7,8,9,10]  | 10  |
+---+-+
10 rows selected (0.224 seconds)
{noformat}

Stack trace from drillbit.log

{noformat}
2017-05-01 19:32:34,209 [26f872ad-62a3-d7a7-aec1-9c7d937a2416:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
26f872ad-62a3-d7a7-aec1-9c7d937a2416: select convert_to(t.arr,'UTF-8') c, id 
from `rptd_count.json` t
...
2017-05-01 19:32:34,328 [26f872ad-62a3-d7a7-aec1-9c7d937a2416:foreman] ERROR 
o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: 
StringIndexOutOfBoundsException: String index out of range: -3


[Error Id: 056a13c0-6c9f-403e-877e-040e907d6581 on centos-01.qa.lab:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
StringIndexOutOfBoundsException: String index out of range: -3


[Error Id: 056a13c0-6c9f-403e-877e-040e907d6581 on centos-01.qa.lab:31010]
at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
 ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:847)
 [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:977) 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:297) 
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_91]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected 
exception during fragment initialization: String index out of range: -3
... 4 common frames omitted
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
range: -3
at java.lang.String.substring(String.java:1931) ~[na:1.8.0_91]
at 
org.apache.drill.exec.planner.logical.PreProcessLogicalRel.getConvertFunctionException(PreProcessLogicalRel.java:244)
 ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.logical.PreProcessLogicalRel.visit(PreProcessLogicalRel.java:148)
 ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.calcite.rel.logical.LogicalProject.accept(LogicalProject.java:129) 
~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.preprocessNode(DefaultSqlHandler.java:641)
 ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:196)
 ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at 
org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:164)
 

Parquet, Arrow, and Drill Roadmap

2017-05-01 Thread John Omernik
Hey all - I posted this to both dev and user as I could mentally make the
argument for both.

Sorry if this is answered somewhere already. I know in the past there have
been discussions around using two different readers for Parquet, and the
performance gains/losses, issues, etc.

Right now, store.parquet.use_new_reader is set to false (1.10) and I
was trying to get more information about this and what the eventual roadmap
for Drill will be.

So, in the docs I see use_new_reader is marked as not supported in this
release. What I am looking for is a little information on:

- What are the two readers (is one a special Drill thing, is the other a
standard reader from the Parquet project?)
- What is the eventual goal here... to be able to use and switch between
both? To provide the option? To have code parity with another project?
- Does either of the readers work with Arrow?
- How do the Arrow and Parquet readers fit together?
- Will the readers ever converge?

I have other questions too, but these are examples of where I am coming
from. Is there a good starting place for my research on this subject?

Thanks,
John


[jira] [Created] (DRILL-5455) Sporadic JDBC connection failure (Drill 1.10)

2017-05-01 Thread Rakesh (JIRA)
Rakesh created DRILL-5455:
-

 Summary: Sporadic JDBC connection failure (Drill 1.10)
 Key: DRILL-5455
 URL: https://issues.apache.org/jira/browse/DRILL-5455
 Project: Apache Drill
  Issue Type: Bug
 Environment: JDK 1.7
Reporter: Rakesh


My invocation to get a JDBC connection succeeds 8 out of 10 times and fails the 
other times with the message below (of course, it's not predictable or 
reproducible, so I don't know the circumstances needed to reproduce it):

JavaException: ../quartz/src/javapy/JavaPy.cpp:449: JavaException: 
java.sql.SQLException: Failure in connecting to Drill: 
oadd.org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. 
()
Java stacktrace:
at 
org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:161)
at 
org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:70)
at 
org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at 
oadd.org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (DRILL-5454) REPEATED_COUNT throws error when used in where clause

2017-05-01 Thread Rakesh (JIRA)
Rakesh created DRILL-5454:
-

 Summary: REPEATED_COUNT throws error when used in where clause
 Key: DRILL-5454
 URL: https://issues.apache.org/jira/browse/DRILL-5454
 Project: Apache Drill
  Issue Type: Bug
Reporter: Rakesh


I have a Parquet file with an array column. When I use the function 
'REPEATED_COUNT' in the select list it works fine, but when it is used in the 
where clause I get an error: 
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
SchemaChangeException: Failure while trying to materialize incoming schema. 
Errors: Error in expression at index 2. Error: Missing function implementation: 
[repeated_count(INT-OPTIONAL)]. Full expression: null.. Fragment 1:4 [Error Id: 
1a9b034c-949a-4faa-9185-55d40e4851e7 on uswxapcsd043.ussdnve.baml.com:31010] 
(org.apache.drill.exec.exception.SchemaChangeException) Failure while trying to 
materialize incoming schema. Errors: Error in expression at index 2. Error: 
Missing function implementation: [repeated_count(INT-OPTIONAL)]. Full 
expression: null..





Re: DRAFT - ASF Board report for Apache Drill (May 2017)

2017-05-01 Thread Aman Sinha
LGTM. 

-Aman

On 5/1/17, 10:34 AM, "Sudheesh Katkam"  wrote:

+1






[GitHub] drill pull request #819: DRILL-5419: Calculate return string length for lite...

2017-05-01 Thread jinfengni
Github user jinfengni commented on a diff in the pull request:

https://github.com/apache/drill/pull/819#discussion_r114162499
  
--- Diff: common/src/main/java/org/apache/drill/common/types/Types.java ---
@@ -636,43 +658,63 @@ public static String toString(final MajorType type) {
 
   /**
    * Get the precision of given type.
-   * @param majorType
-   * @return
+   *
+   * @param majorType major type
+   * @return precision value
    */
   public static int getPrecision(MajorType majorType) {
-    MinorType type = majorType.getMinorType();
-
-    if (type == MinorType.VARBINARY || type == MinorType.VARCHAR) {
-      return 65536;
-    }
-
     if (majorType.hasPrecision()) {
       return majorType.getPrecision();
     }
 
-    return 0;
+    return isScalarStringType(majorType) ? MAX_VARCHAR_LENGTH : UNDEFINED;
--- End diff --

The way this PR calculates the maximum length is not a "guess". The 
metadata leveraged in this PR is in the query itself (the function appearing 
in the query, or a string literal in the query). It does not apply to all 
queries; only when the query provides enough information will Drill calculate 
the maximum length. In all other cases, Drill performs the same as before.

The maximum length (L1) calculated in this PR is different from what you 
describe in the case of the scanner (L2): L2 <= L1. In the example of a 
scanner with no metadata available, there would be no schema change; the 
precision is unset in the type field, same as before. 
 
@arina-ielchiieva can correct me if my understanding is wrong. 
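
To make the L2 <= L1 relationship concrete, here is a toy upper-bound 
calculation (a hypothetical helper, not the PR's actual code; 65535 is an 
assumed value for the MAX_VARCHAR_LENGTH constant seen in the diff):

{noformat}
class VarcharPrecisionSketch {
  static final int MAX_VARCHAR_LENGTH = 65535; // assumed cap for illustration

  // Planner-side upper bound (L1) for concatenating two VARCHAR inputs:
  // the sum of the input precisions, capped at the maximum VARCHAR length.
  // Any value actually produced at run time (L2) satisfies L2 <= L1.
  static int concatUpperBound(int leftPrecision, int rightPrecision) {
    return Math.min(leftPrecision + rightPrecision, MAX_VARCHAR_LENGTH);
  }

  public static void main(String[] args) {
    System.out.println(concatUpperBound(30, 40));       // 70
    System.out.println(concatUpperBound(60000, 60000)); // 65535 (capped)
  }
}
{noformat}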




Re: DRAFT - ASF Board report for Apache Drill (May 2017)

2017-05-01 Thread Sudheesh Katkam
+1




[GitHub] drill pull request #820: DRILL-5391: CTAS: make folder and file permission c...

2017-05-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/drill/pull/820




[GitHub] drill issue #820: DRILL-5391: CTAS: make folder and file permission configur...

2017-05-01 Thread jinfengni
Github user jinfengni commented on the issue:

https://github.com/apache/drill/pull/820
  
+1





DRAFT - ASF Board report for Apache Drill (May 2017)

2017-05-01 Thread Parth Chandra
Hi Drill devs,

The quarterly report to the Apache Board is due again. Below is a draft of
the report I plan to file.
Any suggestions/additions from the dev team? I'll file this by the 3rd so
please provide comments by then.

Thanks

Parth

--- Report Begins ---

## Description:
 - Drill is a Schema-free SQL Query Engine for Hadoop, NoSQL and Cloud
Storage

## Issues:
 - There are no issues requiring board attention at this time

## Activity:
 - Since the last board report, Drill has released version 1.10
 - Drill has added many improvements since the last report including
- Support for left outer joins with nested loop joins
- Support for ANSI_QUOTES option to allow alternatives to backtick for
quoting strings
- New sub-operator test framework
- Fixed missing query text in prepared statement
- Fixed new external sort when data contains map type columns
- Improved query planning time against MapR-DB tables via caching of
row count metadata
- Improved query planning time by using runtime metadata dispatchers

## Health report:
 - The project is healthy. Development activity is at the same level as the
previous period. Activity on the dev mailing list, JIRAs, and pull requests
is the same or higher than in the previous period. Three new committers
were added in the last period.


## PMC changes:

 - Currently 17 PMC members.
 - No new PMC members added in the last 3 months
 - One PMC member has resigned due to lack of time.
 - Last PMC addition was Sudheesh Katkam on Wed Oct 05 2016

## Committer base changes:

 - Currently 33 committers.
 - New committers:
- Abhishek Girish was added as a committer on Mon Feb 13 2017
- Arina Ielchiieva was added as a committer on Thu Feb 23 2017
- Rahul Kumar Challapalli was added as a committer on Wed Feb 15 2017

## Releases:

 - 1.10.0 was released on Tue Mar 14 2017

## Mailing list activity:


 - dev@drill.apache.org:
- 436 subscribers (up 0 in the last 3 months):
- 2066 emails sent to list (1758 in previous quarter)

 - iss...@drill.apache.org:
- 20 subscribers (up 0 in the last 3 months):
- 3118 emails sent to list (2499 in previous quarter)

 - u...@drill.apache.org:
- 586 subscribers (up 9 in the last 3 months):
- 382 emails sent to list (374 in previous quarter)


## JIRA activity:

 - 220 JIRA tickets created in the last 3 months
 - 166 JIRA tickets closed/resolved in the last 3 months

--- Report Ends ---