[GitHub] drill issue #1227: DRILL-6236: batch sizing for hash join
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1227 @ppadma Please fix the Travis failure. ---
[GitHub] drill issue #1234: DRILL-5927: Fixed memory leak in TestBsonRecordReader, an...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1234 @parthchandra Please let me know if you have any comments. ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1234#discussion_r183956964 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java --- @@ -45,21 +46,24 @@ import org.bson.BsonTimestamp; import org.bson.BsonWriter; import org.bson.types.ObjectId; -import org.junit.AfterClass; -import org.junit.BeforeClass; +import org.junit.After; +import org.junit.Before; import org.junit.Test; -public class TestBsonRecordReader extends BaseTestQuery { - private static VectorContainerWriter writer; - private static TestOutputMutator mutator; - private static BsonRecordReader bsonReader; +public class TestBsonRecordReader { + private BufferAllocator allocator; + private VectorContainerWriter writer; + private TestOutputMutator mutator; + private DrillBuf buffer; + private BsonRecordReader bsonReader; - @BeforeClass - public static void setUp() { -BufferAllocator bufferAllocator = getDrillbitContext().getAllocator(); -mutator = new TestOutputMutator(bufferAllocator); + @Before + public void setUp() { +allocator = new RootAllocator(400_000); --- End diff -- @vrozov Made the changes ---
[GitHub] drill issue #1234: DRILL-5927: Fixed memory leak in TestBsonRecordReader, an...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1234 @vrozov addressed comments ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1234#discussion_r183846385 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java --- @@ -49,17 +50,20 @@ import org.junit.BeforeClass; import org.junit.Test; -public class TestBsonRecordReader extends BaseTestQuery { +public class TestBsonRecordReader { + private static BufferAllocator allocator; --- End diff -- Initializing in Before and closing in After works. Changed the variables to be non static as well. ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1234#discussion_r183846152 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java --- @@ -49,17 +50,20 @@ import org.junit.BeforeClass; import org.junit.Test; -public class TestBsonRecordReader extends BaseTestQuery { +public class TestBsonRecordReader { + private static BufferAllocator allocator; private static VectorContainerWriter writer; private static TestOutputMutator mutator; + private static DrillBuf buffer; private static BsonRecordReader bsonReader; @BeforeClass public static void setUp() { -BufferAllocator bufferAllocator = getDrillbitContext().getAllocator(); -mutator = new TestOutputMutator(bufferAllocator); +allocator = new RootAllocator(9_000_00); --- End diff -- After addressing review comments and recompiling I observed different behavior. Now the tests pass with 400 KB on the root allocator, but reducing it to 300 KB causes an IndexOutOfBoundsException. I have filed an issue here: https://issues.apache.org/jira/browse/DRILL-6352 ---
[GitHub] drill issue #1234: DRILL-5927: Fixed memory leak in TestBsonRecordReader, an...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1234 @vrozov addressed comments ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1234#discussion_r183611338 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java --- @@ -49,17 +50,20 @@ import org.junit.BeforeClass; import org.junit.Test; -public class TestBsonRecordReader extends BaseTestQuery { +public class TestBsonRecordReader { + private static BufferAllocator allocator; private static VectorContainerWriter writer; private static TestOutputMutator mutator; + private static DrillBuf buffer; private static BsonRecordReader bsonReader; @BeforeClass public static void setUp() { -BufferAllocator bufferAllocator = getDrillbitContext().getAllocator(); -mutator = new TestOutputMutator(bufferAllocator); +allocator = new RootAllocator(100_000_000); --- End diff -- After testing just now, the lowest I could push this was 9,000,000 bytes. Going to 8,000,000 bytes causes an OOM. It looks like the BsonRecordReader allocates a bunch of value vectors. I will adjust this to 9,000,000. ``` org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate buffer of size 8192 due to memory limit (80). 
Current allocation: 795648 at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:236) at org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:211) at org.apache.drill.exec.vector.UInt1Vector.reallocRaw(UInt1Vector.java:262) at org.apache.drill.exec.vector.UInt1Vector.reAlloc(UInt1Vector.java:251) at org.apache.drill.exec.vector.UInt1Vector$Mutator.setSafe(UInt1Vector.java:480) at org.apache.drill.exec.vector.NullableVarCharVector$Mutator.setSafe(NullableVarCharVector.java:608) at org.apache.drill.exec.vector.complex.impl.NullableVarCharWriterImpl.write(NullableVarCharWriterImpl.java:110) at org.apache.drill.exec.store.bson.BsonRecordReader.writeString(BsonRecordReader.java:276) at org.apache.drill.exec.store.bson.BsonRecordReader.writeBinary(BsonRecordReader.java:205) at org.apache.drill.exec.store.bson.BsonRecordReader.writeToListOrMap(BsonRecordReader.java:117) at org.apache.drill.exec.store.bson.BsonRecordReader.write(BsonRecordReader.java:75) at org.apache.drill.exec.store.bson.TestBsonRecordReader.testBinaryTypes(TestBsonRecordReader.java:244) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.lang.reflect.Method.invoke(Method.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.lang.reflect.Method.invoke(Method.java:498) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) ``` ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1234#discussion_r183608064 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java --- @@ -49,17 +50,20 @@ import org.junit.BeforeClass; import org.junit.Test; -public class TestBsonRecordReader extends BaseTestQuery { +public class TestBsonRecordReader { + private static BufferAllocator allocator; --- End diff -- JUnit requires methods annotated with @BeforeClass and @AfterClass to be static, so these variables have to be static. Currently the Drill unit tests do not support concurrent execution of methods within a test class within the same process. Drill has surefire launch multiple test processes, and the test classes to execute are divided among those processes. Within each forked surefire process tests are executed sequentially, so this will not be an issue. ---
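The per-test lifecycle the review converges on (non-static fields with `@Before`/`@After` instead of static `@BeforeClass`/`@AfterClass`) can be sketched with plain-JDK stand-ins. `Resource` below is a hypothetical placeholder for the Drill allocator, and `runOneTest` mimics what the JUnit runner does around each test method:

```java
// Sketch of per-test setup/teardown with non-static fields, assuming a
// hypothetical AutoCloseable "Resource" standing in for the Drill allocator.
import java.util.concurrent.atomic.AtomicInteger;

public class PerTestLifecycle {
  static final AtomicInteger OPEN = new AtomicInteger(); // tracks unclosed resources

  static class Resource implements AutoCloseable {
    Resource() { OPEN.incrementAndGet(); }
    @Override public void close() { OPEN.decrementAndGet(); }
  }

  private Resource resource;                    // non-static: fresh per test

  void setUp() { resource = new Resource(); }   // would carry @Before
  void tearDown() { resource.close(); }         // would carry @After

  // Mimics the runner: setUp, test body, tearDown even on failure.
  void runOneTest(Runnable body) {
    setUp();
    try { body.run(); } finally { tearDown(); }
  }

  public static void main(String[] args) {
    PerTestLifecycle t = new PerTestLifecycle();
    t.runOneTest(() -> { /* test body would use t.resource */ });
    t.runOneTest(() -> { });
    System.out.println("open after tests: " + OPEN.get()); // 0 means no leak
  }
}
```

Because `@Before`/`@After` methods run on an instance, the fields need not be static and each test gets its own allocator that is reliably closed, which is what fixes the leak.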
[GitHub] drill issue #1220: DRILL-6328: Consolidate developer docs in docs folder.
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1220 replaced with TBA as suggested. ---
[GitHub] drill issue #1234: DRILL-5927: Fixed memory leak in TestBsonRecordReader, an...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1234 @parthchandra Please review ---
[GitHub] drill pull request #1234: DRILL-5927: Fixed memory leak in TestBsonRecordRea...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1234 DRILL-5927: Fixed memory leak in TestBsonRecordReader, and sped up the test. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-5927 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1234.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1234 commit 7d431e20c9cdb0038116131aecdc26a1f60e8775 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-20T23:52:59Z DRILL-5927: Fixed memory leak in TestBsonRecordReader, and sped up the test. ---
[GitHub] drill issue #1221: DRILL-6323: Fix license headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1221 +1 LGTM ---
[GitHub] drill issue #1222: DRILL-6341: Fixed failing tests for mongodb storage plugi...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1222 @cgivre Please verify this patch fixes the issue for you as well. @arina-ielchiieva ---
[GitHub] drill pull request #1222: DRILL-6341: Fixed failing tests for mongodb storag...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1222 DRILL-6341: Fixed failing tests for mongodb storage plugin ## Issue The MongoDB storage plugin tests fail on osx 10.13.x because MongoDB 3.2 itself has a bug on osx 10.13.x. This is a known and documented issue here https://github.com/github/hub/issues/1405 . The issue occurs consistently on my laptop. ## Fix - The version of MongoDB used for tests needs to be updated to 3.4. Why 3.4 and not 3.6? There is some discussion that people are experiencing issues with 3.6 on Windows, so 3.4 is the safest option. - The **de.flapdoodle.embed.mongo** plugin needs to be updated to the latest 2.x version, which supports Mongo 3.4. Currently we are using 1.x, which doesn't support Mongo 3.4. - In MongoDB 3.4 config servers need to be part of a replica set. Also, **de.flapdoodle.embed.mongo** 2.x requires a replica set for config servers. The minimum number of servers in a replica set is 3, so I launch 3 config servers now. - There is a bug in **de.flapdoodle.embed.mongo** where config servers need to be initialized as replica sets, so you need to add them to your complete list of replica sets to be initialized. But if you provide the configServers to MongosSystemForTestFactory it will try to start the config servers again! To work around this bug I pass an empty list of config servers to MongosSystemForTestFactory. This ensures each config server is only started once. - The **wiredTiger** storage engine is required to run replica sets. To ensure we don't revert back to the **mmapv1** storage engine I explicitly configure **wiredTiger** on all the configs. - **useNoPrealloc** and **useSmallFiles** default to true and must be set to false, since they are configs for **mmapv1** and **de.flapdoodle.embed.mongo** passes these options to MongoDB when they are true. When MongoDB sees **mmapv1** options it throws an error. To prevent all this I have to explicitly set these two options to false.
- The config server and shard server flags need to be set correctly on the configs, so I do that now. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6341 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1222.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1222 commit 0dde8c99c92281f8993ee598af22480afcf67a28 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-17T20:29:14Z DRILL-6341: Fixed failing tests for mongodb storage plugin by upgrading MongoDB version. ---
[GitHub] drill issue #1220: DRILL-6328: Consolidate developer docs in docs folder.
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1220 @paul-rogers @arina-ielchiieva Please review. After this goes in I will add the additional testing docs I wrote to Paul's existing documentation. ---
[GitHub] drill pull request #1220: DRILL-6328: Consolidate developer docs in docs fol...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1220 DRILL-6328: Consolidate developer docs in docs folder. Consolidating the existing developer docs in the docs folder. This is just a copy paste from Paul's wiki https://github.com/paul-rogers/drill/wiki/Testing-Tips. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6328 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1220.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1220 commit cc5ce0c83aba34396ecf257f447727cbb72d6da1 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-18T00:36:57Z DRILL-6328: Consolidated developer docs in the docs folder. commit 5fb2c42a9e09bb36caca791b77b550cfdba32919 Author: Paul Rogers <progers@...> Date: 2018-04-18T00:43:02Z DRILL-6328: Adding unit testing docs. ---
[GitHub] drill pull request #1215: DRILL-6338: Do not skip license maven plugin when ...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1215#discussion_r182266896 --- Diff: protocol/pom.xml --- @@ -149,8 +152,12 @@ com.mycila license-maven-plugin + --- End diff -- Thanks for catching. Removed, verified that `mvn process-sources -P proto-compile` and `mvn clean install -P proto-compile` generated code with the headers and that `mvn process-sources -P proto-compile -Dlicense.skip=true` generates sources without license headers. ---
[GitHub] drill issue #1215: DRILL-6338: Do not skip license maven plugin when formatt...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1215 @vrozov @arina-ielchiieva Please review. ---
[GitHub] drill issue #1208: DRILL-6295: PartitionerDecorator may close partitioners w...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1208 +1 LGTM. I do not see any more race conditions. ---
[GitHub] drill pull request #1215: DRILL-6338: Do not skip license maven plugin when ...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1215 DRILL-6338: Do not skip license maven plugin when formatting generated protobuf files. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6338 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1215.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1215 commit 64d5bece38275375d5e2f24a64dd0643a5d92964 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-17T18:57:40Z DRILL-6338: Do not skip license maven plugin when formatting generated protobuf files. ---
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181915654 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +280,122 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; -private volatile IOException exp; +private volatile ExecutionException exception; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + 
this.partitionerDecorator = partitionerDecorator; this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread - try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); - } - - final Thread currThread = Thread.currentThread(); - final String currThreadName = currThread.getName(); - final OperatorStats localStats = part.getStats(); - try { -final String newThreadName = parentThreadName + currThread.getId(); -currThread.setName(newThreadName); + final Thread thread = Thread.currentThread(); + if (runner.compareAndSet(null, thread)) { +final String name = thread.getName(); +thread.setName(String.format("Partitioner-%s-%d", partitionerDecorator.thread.getName(), thread.getId())); +final OperatorStats localStats = partitioner.getStats(); localStats.clear(); localStats.startProcessing(); -iface.execute(part); - } catch (IOException e) { -exp = e; - } finally { -localStats.stopProcessing(); -currThread.setName(currThreadName); -latch.countDown(); +ExecutionException executionException = null; +try { + // Test only - Pause until interrupted by fragment thread + testCountDownLatch.await(); + if (state.get() == STATE.NEW) { +iface.execute(partitioner); + } +} catch (InterruptedException e) { + if (state.compareAndSet(STATE.NEW, STATE.INTERRUPTED)) { +logger.warn("Partitioner Task interrupted during the run", e); + } +} catch (Throwable t) { + executionException = new ExecutionException(t); +} finally { + if (state.compareAndSet(STATE.NEW, STATE.COMPLETING)) { +if (executionException == null) { + localStats.stopProcessing(); + state.lazySet(STATE.NORMAL); +} else { +
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181894189 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerTemplate.java --- @@ -161,8 +161,11 @@ public OperatorStats getStats() { * @param schemaChanged true if the schema has changed */ @Override - public void flushOutgoingBatches(boolean isLastBatch, boolean schemaChanged) throws IOException { + public void flushOutgoingBatches(boolean isLastBatch, boolean schemaChanged) throws IOException, InterruptedException { for (OutgoingRecordBatch batch : outgoingBatches) { + if (Thread.interrupted()) { +throw new InterruptedException(); --- End diff -- Since we are checking for interrupts here already could we remove `Thread.currentThread().isInterrupted()` in the flush(boolean schemaChanged) method? ---
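The interrupt check under discussion is a cooperative-cancellation pattern: test the flag once per batch and surface it as `InterruptedException`, so duplicate checks deeper in the call chain become redundant. A minimal plain-JDK sketch (names are hypothetical, not Drill's):

```java
// Cooperative cancellation: check the interrupt flag at the top of the
// per-batch loop and bail out with InterruptedException, as the
// flushOutgoingBatches change above does.
import java.util.List;

public class FlushSketch {
  static int flushed; // stand-in counter for batches actually flushed

  static void flushAll(List<String> batches) throws InterruptedException {
    for (String batch : batches) {
      if (Thread.interrupted()) {       // reads AND clears the flag
        throw new InterruptedException();
      }
      flushed++;                        // stand-in for flushing one batch
    }
  }

  public static void main(String[] args) throws InterruptedException {
    flushAll(List.of("a", "b", "c"));
    System.out.println(flushed);        // 3: not interrupted, all flushed

    Thread.currentThread().interrupt(); // simulate a cancellation request
    try {
      flushAll(List.of("d"));
    } catch (InterruptedException e) {
      System.out.println("interrupted"); // flag was set, loop bailed out early
    }
  }
}
```

With the check at the loop head, a second `Thread.currentThread().isInterrupted()` test inside the per-batch flush adds nothing, which is the point the comment raises.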
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181871311 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -118,105 +127,114 @@ public PartitionOutgoingBatch getOutgoingBatches(int index) { return null; } - @VisibleForTesting - protected List getPartitioners() { + List getPartitioners() { return partitioners; } /** * Helper to execute the different methods wrapped into same logic * @param iface - * @throws IOException + * @throws ExecutionException */ - protected void executeMethodLogic(final GeneralExecuteIface iface) throws IOException { -if (partitioners.size() == 1 ) { - // no need for threads - final OperatorStats localStatsSingle = partitioners.get(0).getStats(); - localStatsSingle.clear(); - localStatsSingle.startProcessing(); + @VisibleForTesting + void executeMethodLogic(final GeneralExecuteIface iface) throws ExecutionException { +// To simulate interruption of main fragment thread and interrupting the partition threads, create a +// CountDownInject latch. Partitioner threads await on the latch and main fragment thread counts down or +// interrupts waiting threads. This makes sure that we are actually interrupting the blocked partitioner threads. +try (CountDownLatchInjection testCountDownLatch = injector.getLatch(context.getExecutionControls(), "partitioner-sender-latch")) { --- End diff -- I see thx. ---
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181858695 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -118,105 +127,114 @@ public PartitionOutgoingBatch getOutgoingBatches(int index) { return null; } - @VisibleForTesting - protected List getPartitioners() { + List getPartitioners() { return partitioners; } /** * Helper to execute the different methods wrapped into same logic * @param iface - * @throws IOException + * @throws ExecutionException */ - protected void executeMethodLogic(final GeneralExecuteIface iface) throws IOException { -if (partitioners.size() == 1 ) { - // no need for threads - final OperatorStats localStatsSingle = partitioners.get(0).getStats(); - localStatsSingle.clear(); - localStatsSingle.startProcessing(); + @VisibleForTesting + void executeMethodLogic(final GeneralExecuteIface iface) throws ExecutionException { +// To simulate interruption of main fragment thread and interrupting the partition threads, create a +// CountDownInject latch. Partitioner threads await on the latch and main fragment thread counts down or +// interrupts waiting threads. This makes sure that we are actually interrupting the blocked partitioner threads. +try (CountDownLatchInjection testCountDownLatch = injector.getLatch(context.getExecutionControls(), "partitioner-sender-latch")) { --- End diff -- I'm not sure that we should be using the injector to create a count down latch here. My understanding is that we have to define a `partitioner-sender-latch` injection site on the `"drill.exec.testing.controls"` property and it is intended only for testing. See ControlsInjectionUtil.createLatch(). The default value for `drill.exec.testing.controls` is empty so the getLatch method would return a Noop latch since `partitioner-sender-latch` is undefined. 
Since we always want to create a count down latch here (not just for testing) shouldn't we directly create one? ---
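For context on the semantics being debated, a plain `java.util.concurrent.CountDownLatch` used directly (rather than obtained through the testing-controls injector) behaves like this; thread names here are illustrative:

```java
// Direct CountDownLatch usage: a worker blocks in await() until another
// thread counts the latch down, which is the coordination the injected
// latch provides only when the injection site is configured.
import java.util.concurrent.CountDownLatch;

public class LatchSketch {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);

    Thread worker = new Thread(() -> {
      try {
        latch.await();                  // blocks until countDown() is called
        System.out.println("released");
      } catch (InterruptedException e) {
        System.out.println("interrupted while waiting");
      }
    });
    worker.start();

    latch.countDown();                  // main thread releases the worker
    worker.join();
  }
}
```

An uninjected site instead yields a no-op latch whose `await()` returns immediately, which is why the comment asks whether the latch should be created directly when it is always needed.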
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181851002 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +280,124 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; -private volatile IOException exp; +private volatile ExecutionException exception; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + 
this.partitionerDecorator = partitionerDecorator; this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread + final Thread thread = Thread.currentThread(); + Preconditions.checkState(runner.compareAndSet(null, thread), + "PartitionerTask can be executed only once."); + if (state.get() == STATE.NEW) { +final String name = thread.getName(); +thread.setName(String.format("Partitioner-%s-%d", partitionerDecorator.thread.getName(), thread.getId())); +final OperatorStats localStats = partitioner.getStats(); +localStats.clear(); +localStats.startProcessing(); +ExecutionException executionException = null; +try { + // Test only - Pause until interrupted by fragment thread + testCountDownLatch.await(); + iface.execute(partitioner); +} catch (InterruptedException e) { + if (state.compareAndSet(STATE.NEW, STATE.INTERRUPTED)) { +logger.warn("Partitioner Task interrupted during the run", e); + } +} catch (Throwable t) { + executionException = new ExecutionException(t); +} +if (state.compareAndSet(STATE.NEW, STATE.COMPLETING)) { + if (executionException == null) { +localStats.stopProcessing(); +state.lazySet(STATE.NORMAL); + } else { +exception = executionException; +state.lazySet(STATE.EXCEPTIONAL); + } +} +if (count.decrementAndGet() == 0) { + LockSupport.unpark(partitionerDecorator.thread); +} +thread.setName(name); + } + runner.set(null); + while (state.get() == STATE.INTERRUPTING) { +Thread.yield(); + } + // Clear interrupt flag try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); +Thread.sleep(0); --- End diff -- Could we use Thread.interrupted() inste
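The distinction behind the suggestion above: `Thread.interrupted()` reads and clears the current thread's interrupt flag in one call, while `Thread.currentThread().isInterrupted()` only reads it, and `Thread.sleep(0)` clears it only indirectly by throwing `InterruptedException`. This can be checked with the plain JDK:

```java
// Demonstrates interrupt-flag semantics: isInterrupted() is a pure read,
// interrupted() is a read-and-clear, so no Thread.sleep(0) is needed
// just to reset the flag.
public class InterruptFlagDemo {
  public static void main(String[] args) {
    Thread.currentThread().interrupt();

    // isInterrupted() does not clear the flag
    System.out.println(Thread.currentThread().isInterrupted()); // true
    System.out.println(Thread.currentThread().isInterrupted()); // still true

    // interrupted() returns the flag's value and clears it
    System.out.println(Thread.interrupted()); // true, flag now cleared
    System.out.println(Thread.interrupted()); // false
  }
}
```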
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @arina-ielchiieva Please let me know if you have any comments or if things look good. ---
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @vrozov followed your instructions. Building the source distribution with license checks enabled works. ---
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181488592 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +280,122 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; -private volatile IOException exp; +private volatile ExecutionException exception; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + 
this.partitionerDecorator = partitionerDecorator; this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread - try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); - } - - final Thread currThread = Thread.currentThread(); - final String currThreadName = currThread.getName(); - final OperatorStats localStats = part.getStats(); - try { -final String newThreadName = parentThreadName + currThread.getId(); -currThread.setName(newThreadName); + final Thread thread = Thread.currentThread(); + Preconditions.checkState(runner.compareAndSet(null, thread), + "PartitionerTask can be executed only once."); + if (state.get() == STATE.NEW) { +final String name = thread.getName(); +thread.setName(String.format("Partitioner-%s-%d", partitionerDecorator.thread.getName(), thread.getId())); +final OperatorStats localStats = partitioner.getStats(); localStats.clear(); localStats.startProcessing(); -iface.execute(part); - } catch (IOException e) { -exp = e; - } finally { -localStats.stopProcessing(); -currThread.setName(currThreadName); -latch.countDown(); +ExecutionException executionException = null; +try { + // Test only - Pause until interrupted by fragment thread + testCountDownLatch.await(); + iface.execute(partitioner); +} catch (InterruptedException e) { + if (state.compareAndSet(STATE.NEW, STATE.INTERRUPTED)) { +logger.warn("Partitioner Task interrupted during the run", e); + } +} catch (Throwable t) { + executionException = new ExecutionException(t); +} +if (state.compareAndSet(STATE.NEW, STATE.COMPLETING)) { + if (executionException == null) { +localStats.stopProcessing(); +state.lazySet(STATE.NORMAL)
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @vrozov @arina-ielchiieva Addressed comments. ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181475030 --- Diff: pom.xml --- @@ -375,12 +449,12 @@ - com.mycila license-maven-plugin 3.0 + ${license.skip} --- End diff -- updated with suggestion above ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181474959 --- Diff: pom.xml --- @@ -198,6 +200,78 @@ + +org.apache.rat +apache-rat-plugin +0.12 + + +rat-checks +validate + + check + + + + + ${rat.skip} --- End diff -- Your suggestion works. Made the change. ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181468772 --- Diff: pom.xml --- @@ -198,6 +200,78 @@ + +org.apache.rat +apache-rat-plugin +0.12 + + --- End diff -- You are right, the docs say it is bound to the validate phase by default: http://creadur.apache.org/rat/apache-rat-plugin/check-mojo.html. I also removed `validate` and tested that it works correctly. ---
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @arina-ielchiieva fixed ---
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181265596 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +279,120 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; private volatile IOException exp; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + this.partitionerDecorator = partitionerDecorator; 
this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread - try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); - } - - final Thread currThread = Thread.currentThread(); - final String currThreadName = currThread.getName(); - final OperatorStats localStats = part.getStats(); - try { -final String newThreadName = parentThreadName + currThread.getId(); -currThread.setName(newThreadName); + final Thread thread = Thread.currentThread(); + if (state.get() == STATE.NEW && runner.compareAndSet(null, thread)) { --- End diff -- I think there is a race condition here. Consider the following case:
1. A PartitionerTask starts executing; let's call it **Task A**.
2. Task A executes the state check `state.get() == STATE.NEW`, and then its execution stops temporarily.
3. The main PartitionerDecorator thread executes await(count, partitionerTasks).
4. `context.getExecutorState().shouldContinue()` is false, so the PartitionerTasks are cancelled.
5. The cancel method is called for **Task A**.
6. In the cancel method, `state.compareAndSet(STATE.NEW, mayInterruptIfRunning ? STATE.INTERRUPTING : STATE.CANCELLED)` returns true.
7. `Thread thread = runner.get();` is executed, but it is null since **Task A** has not set the runner yet.
8. The else branch of the cancel method is taken and `((ThreadPoolExecutor)partitionerDecorator.executor).remove(this);` is called.
9. The remove method does not cancel **Task A**, since it has already started executing, and the interrupt flag is not set, so it continues running.
10. `count.decrementAndGet();` is executed, so the count will be zero while **Task A** is still running. ---
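The interleaving above can be replayed deterministically on a single thread. The class below is a hypothetical simplification of PartitionerTask (the names `state`, `runner`, and `count` mirror the diff; `cancel()` is condensed to its relevant branch), not Drill's code; it only shows why the early `state.get() == STATE.NEW` check lets `cancel()` decrement the count while the task still runs.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

public class CancelRaceDemo {
  enum State { NEW, CANCELLED }

  // Condensed stand-ins for the fields in the diff above.
  static final AtomicReference<State> state = new AtomicReference<>(State.NEW);
  static final AtomicReference<Thread> runner = new AtomicReference<>();
  static final AtomicInteger count = new AtomicInteger(1); // one outstanding task
  static boolean taskRan = false;

  // Condensed cancel(): steps 5-10 of the walkthrough.
  static void cancel() {
    if (state.compareAndSet(State.NEW, State.CANCELLED)) {
      Thread t = runner.get();
      if (t != null) {
        t.interrupt();           // task already running: interrupt it
      } else {
        count.decrementAndGet(); // runner is null: assume the task never started
      }
    }
  }

  public static void main(String[] args) {
    // Step 2: run() evaluates the first operand of
    // `state.get() == STATE.NEW && runner.compareAndSet(null, thread)` ...
    boolean sawNew = state.get() == State.NEW;
    // Steps 3-10: the fragment thread cancels while run() is paused here.
    cancel();
    // run() resumes: the stale check already passed, the runner CAS still
    // succeeds, so the body executes even though count has already hit zero.
    if (sawNew && runner.compareAndSet(null, Thread.currentThread())) {
      taskRan = true;
    }
    System.out.println("count=" + count.get() + ", taskRan=" + taskRan);
  }
}
```

The revised form quoted earlier in this thread, which does `Preconditions.checkState(runner.compareAndSet(null, thread), ...)` before the state check, appears to close this window by publishing the runner thread first, so a concurrent `cancel()` sees a non-null runner and interrupts it instead of decrementing the count.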
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181262812 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +279,120 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; private volatile IOException exp; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + this.partitionerDecorator = partitionerDecorator; 
this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread - try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); - } - - final Thread currThread = Thread.currentThread(); - final String currThreadName = currThread.getName(); - final OperatorStats localStats = part.getStats(); - try { -final String newThreadName = parentThreadName + currThread.getId(); -currThread.setName(newThreadName); + final Thread thread = Thread.currentThread(); + if (state.get() == STATE.NEW && runner.compareAndSet(null, thread)) { --- End diff -- Never mind I think I see why ---
[GitHub] drill pull request #1208: DRILL-6295: PartitionerDecorator may close partiti...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1208#discussion_r181260542 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionerDecorator.java --- @@ -262,68 +279,120 @@ public FlushBatchesHandlingClass(boolean isLastBatch, boolean schemaChanged) { } @Override -public void execute(Partitioner part) throws IOException { +public void execute(Partitioner part) throws IOException, InterruptedException { part.flushOutgoingBatches(isLastBatch, schemaChanged); } } /** - * Helper class to wrap Runnable with customized naming - * Exception handling + * Helper class to wrap Runnable with cancellation and waiting for completion support * */ - private static class CustomRunnable implements Runnable { + private static class PartitionerTask implements Runnable { + +private enum STATE { + NEW, + COMPLETING, + NORMAL, + EXCEPTIONAL, + CANCELLED, + INTERRUPTING, + INTERRUPTED +} + +private final AtomicReference state; +private final AtomicReference runner; +private final PartitionerDecorator partitionerDecorator; +private final AtomicInteger count; -private final String parentThreadName; -private final CountDownLatch latch; private final GeneralExecuteIface iface; -private final Partitioner part; +private final Partitioner partitioner; private CountDownLatchInjection testCountDownLatch; private volatile IOException exp; -public CustomRunnable(final String parentThreadName, final CountDownLatch latch, final GeneralExecuteIface iface, -final Partitioner part, CountDownLatchInjection testCountDownLatch) { - this.parentThreadName = parentThreadName; - this.latch = latch; +public PartitionerTask(PartitionerDecorator partitionerDecorator, GeneralExecuteIface iface, Partitioner partitioner, AtomicInteger count, CountDownLatchInjection testCountDownLatch) { + state = new AtomicReference<>(STATE.NEW); + runner = new AtomicReference<>(); + this.partitionerDecorator = partitionerDecorator; 
this.iface = iface; - this.part = part; + this.partitioner = partitioner; + this.count = count; this.testCountDownLatch = testCountDownLatch; } @Override public void run() { - // Test only - Pause until interrupted by fragment thread - try { -testCountDownLatch.await(); - } catch (final InterruptedException e) { -logger.debug("Test only: partitioner thread is interrupted in test countdown latch await()", e); - } - - final Thread currThread = Thread.currentThread(); - final String currThreadName = currThread.getName(); - final OperatorStats localStats = part.getStats(); - try { -final String newThreadName = parentThreadName + currThread.getId(); -currThread.setName(newThreadName); + final Thread thread = Thread.currentThread(); + if (state.get() == STATE.NEW && runner.compareAndSet(null, thread)) { --- End diff -- Why is this check necessary? ---
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @arina-ielchiieva @vrozov I'll split the change into two commits. One with a change to the pom files and install.md . Another attributed to `dev@drill.apache.org` with all the formatting changes. ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181249189 --- Diff: pom.xml --- @@ -198,6 +200,78 @@ + +org.apache.rat +apache-rat-plugin +0.12 + + --- End diff -- This was here originally. I tried removing it. When it is removed the rat plugin is not executed when doing `mvn clean install`. So it seems to be required. ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181249226 --- Diff: pom.xml --- @@ -198,6 +200,78 @@ + +org.apache.rat +apache-rat-plugin +0.12 --- End diff -- fixed ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181248921 --- Diff: pom.xml --- @@ -198,6 +200,78 @@ + +org.apache.rat +apache-rat-plugin +0.12 + + +rat-checks +validate + + check + + + + + ${rat.skip} --- End diff -- I originally tried `true`. However, this does not allow the skip to be overridden when the `-Drat.skip=false` argument is provided. It seems the only way to have an overridable default is to explicitly define a property with the default value. ---
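The pattern described in the comment can be sketched as a pom excerpt. This is a hypothetical fragment: the plugin coordinates follow the diff quoted above, but the surrounding layout is reconstructed for illustration.

```xml
<!-- An explicit property supplies the overridable default... -->
<properties>
  <rat.skip>true</rat.skip>
</properties>

<!-- ...and the plugin configuration references the property, so
     `mvn -Drat.skip=false ...` can override it from the command line.
     A literal <skip>true</skip> in <configuration> would win over the
     -D user property and could not be overridden. -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <version>0.12</version>
  <configuration>
    <skip>${rat.skip}</skip>
  </configuration>
</plugin>
```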
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1207#discussion_r181248949 --- Diff: pom.xml --- @@ -375,12 +449,12 @@ - com.mycila license-maven-plugin 3.0 + ${license.skip} --- End diff -- see comment above. ---
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 @arina-ielchiieva Updated INSTALL.md @vrozov Added the rat plugin back. Disabled license checking for both rat and the license-maven-plugin by default and enabled it for travis ---
[GitHub] drill issue #1207: DRILL-6320: Fixed License Headers
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1207 Abhishek has already updated jenkins to the latest maven. So we should be clear to merge after review is done. ---
[GitHub] drill pull request #1207: DRILL-6320: Fixed License Headers
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1207 DRILL-6320: Fixed License Headers

There were several issues with license headers before this change:
1. We used the apache-rat-plugin to check licenses, which was not thread safe, so concurrent builds of submodules with `mvn -T 2C clean install` could fail.
2. The apache-rat-plugin did not check all files for licenses and did not provide a warning for the files it skipped.
3. It allowed inconsistent formatting of license headers.
4. It allowed license headers to be in javadoc comments. This is problematic since we don't want the license to be included in every documentation page when we decide to publish javadocs. Furthermore, rat provided no way to configure the allowed comment style, so we were forced to live with this.

All of these issues were fixed by moving to the license-maven-plugin. I have reformatted all the license headers and manually verified the headers on each file to make sure they are correct. Now every file in the project is checked for a license, except for some shell scripts that do not have file extensions in their names. Also, none of the license headers are in javadoc comments now.

You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6320 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1207.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1207 commit 066fc4b58a3de4237fab4a597c67c948882fdcd2 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T00:27:45Z DRILL-6320: Switched from apache-rate-plugin to license-maven-plugin and disallowed javadoc comments for license headers.
commit 5fcea57acd47e6d1acc956517c1d3499f84729b1 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T00:45:49Z - More fixes commit d8704eea73294d5beab3b470e51f576e3c7f977f Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T00:49:25Z - Even more fixes commit 6c4ba191ed9fc84d32ea72ecd6fbf3972ba1028d Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T19:32:57Z - fixed more licenses commit 7274190b7314265a0d5ed293374069ca43b5611a Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T19:36:00Z - fix new line commit 6d7227e3b7df66120ad1ca698986c0e49eb77b96 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-04-11T19:45:15Z - Put proper license headers in ftl files. ---
[GitHub] drill pull request #1202: DRILL-6311: No logging information in drillbit.log...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1202#discussion_r180252832 --- Diff: exec/vector/pom.xml --- @@ -69,6 +69,7 @@ org.apache.drill drill-common ${project.version} + test --- End diff -- @vrozov As we discussed offline, drill-common-tests is not a dependency of the vector project itself; it is a dependency required by a Maven invocation when a category is specified. For example, in `mvn test -DexcludedGroups="org.apache.drill.categories.SlowTest,org.apache.drill.categories.UnlikelyTest,org.apache.drill.categories.SecurityTest"` the category classes are required by Maven. However, when `mvn test` is used, the category classes are not required by Maven. ---
[GitHub] drill pull request #1202: DRILL-6311: No logging information in drillbit.log...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1202#discussion_r180247017 --- Diff: exec/vector/pom.xml --- @@ -69,6 +69,7 @@ org.apache.drill drill-common ${project.version} + test --- End diff -- The `tests` tag includes the test classes in drill-common. The test category classes are included in common/src/test/java/org/apache/drill/categories, so the test category classes will be accessible to Maven. However, it looks like I forgot the test scope, so the test classes and resources from drill-common were being included in Drill's standard (non-test) classpath. ---
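For reference, the dependency shape being discussed is a test-jar dependency. This is a hypothetical excerpt reconstructing the tags stripped from the diff above; the `tests` classifier and `test` scope are the two elements the comments refer to:

```xml
<!-- Pull in drill-common's test classes (the "tests" classifier
     artifact), restricted to the test classpath via test scope, so the
     category marker interfaces never leak into the runtime classpath. -->
<dependency>
  <groupId>org.apache.drill</groupId>
  <artifactId>drill-common</artifactId>
  <version>${project.version}</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>
```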
[GitHub] drill pull request #1202: DRILL-6311: No logging information in drillbit.log...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1202#discussion_r180205787 --- Diff: exec/vector/pom.xml --- @@ -69,6 +69,7 @@ org.apache.drill drill-common ${project.version} + test --- End diff -- @vrozov JUnit supports including or excluding tests based on categories. Categories are just marker interfaces that are passed to the `@Category` annotation. For example, a test class annotated with `@Category(SlowTest.class)` will be excluded from a Travis run, since Travis excludes the SlowTest category. All the test categories needed to be included in a common artifact, since all the Drill submodules require them, so the test categories were added to drill-common. If drill-common is not included, mvn would fail with an error when running tests marked with `@Category(SlowTest.class)`, since JUnit would not be able to find the SlowTest class. ---
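The mechanism can be illustrated with a self-contained sketch: a marker interface, an annotation carrying it, and a filter that drops methods tagged with an excluded category. `SlowTest` and `Category` below are standalone re-creations that mirror the JUnit 4 names (the real ones are `org.junit.experimental.categories.Category` and Drill's `org.apache.drill.categories.SlowTest`); this is not JUnit's implementation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class CategoryFilterDemo {
  public interface SlowTest {} // marker interface: no members, only an identity

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  public @interface Category {
    Class<?> value();
  }

  public static class SampleTests {
    @Category(SlowTest.class)
    public void slowCase() {}  // excluded when SlowTest is the excluded group

    public void fastCase() {}  // untagged: always included
  }

  // Keep the test methods that are NOT tagged with the excluded category.
  public static List<String> included(Class<?> testClass, Class<?> excluded) {
    List<String> names = new ArrayList<>();
    for (Method m : testClass.getDeclaredMethods()) {
      Category c = m.getAnnotation(Category.class);
      if (c == null || !c.value().equals(excluded)) {
        names.add(m.getName());
      }
    }
    return names;
  }

  public static void main(String[] args) {
    System.out.println(included(SampleTests.class, SlowTest.class));
  }
}
```

The key point the comment makes follows directly from the filter: the excluded class object (`SlowTest.class` here) must be loadable at filtering time, which is why the category interfaces have to live in an artifact every submodule can see.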
[GitHub] drill pull request #1178: DRILL-6278: Removed temp codegen directory in test...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1178#discussion_r176947145 --- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/TopN/TopNBatchTest.java --- @@ -159,7 +157,6 @@ public void priorityQueueOrderingTest() throws Exception { @Test public void sortOneKeyAscending() throws Throwable { ClusterFixtureBuilder builder = ClusterFixture.builder(dirTestWatcher) - .configProperty(ClassBuilder.CODE_DIR_OPTION, dirTestWatcher.getDir().getAbsolutePath()) .configProperty(CodeCompiler.ENABLE_SAVE_CODE_FOR_DEBUG_TOPN, true); --- End diff -- Thanks for catching. Removed it. ---
[GitHub] drill issue #1178: DRILL-6278: Removed temp codegen directory in testing fra...
Github user ilooner-mapr commented on the issue: https://github.com/apache/drill/pull/1178 @vvysotskyi thanks for catching this. I've removed it from TopNBatchTest. ---
[GitHub] drill pull request #1178: DRILL-6278: Removed temp codegen directory in test...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1178 DRILL-6278: Removed temp codegen directory in testing framework. @vvysotskyi Please review You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6278 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1178.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1178 commit bde89d68b570eaa8792baa2517fab3c9765c28a4 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-03-21T06:00:22Z DRILL-6278: Removed temp codegen directory in testing framework. ---
[GitHub] drill issue #1176: DRILL-6275: Fixed direct memory reporting in sys.memory.
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1176 Tested fix manually on my laptop. ---
[GitHub] drill pull request #1176: DRILL-6275: Fixed direct memory reporting in sys.m...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1176 DRILL-6275: Fixed direct memory reporting in sys.memory. @kkhatua Thanks for pinpointing the root cause! Please review. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6275 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1176.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1176 commit 7f65b7d4b4b9e42dc3597ac9758c39c6ce0903b7 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-03-19T20:16:37Z DRILL-6275: Fixed direct memory reporting in sys.memory. ---
[GitHub] drill issue #1165: DRILL-6239: Add build and license badges to README.md
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1165 @vrozov Fixed bad link and added maven badge. Currently the maven badge points to drill's distribution pom. It's also interesting to note that only some of drill's artifacts are published on maven central. See https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.drill%22 ---
[GitHub] drill pull request #1165: DRILL-6239: Add build and license badges to README...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1165#discussion_r174570587 --- Diff: README.md --- @@ -1,5 +1,8 @@ # Apache Drill +[![Build Status](https://travis-ci.org/apache/incubator-openwhisk.svg?branch=master)](https://travis-ci.org/apache/drill) --- End diff -- Oh carp! Thanks for catching this. ---
[GitHub] drill issue #1164: DRILL-6234: Improved documentation for VariableWidthVecto...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1164 @paul-rogers Applied review comments. ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174350127 --- Diff: exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java --- @@ -0,0 +1,152 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.drill.exec.vector; + +import org.apache.drill.common.types.TypeProtos; +import org.apache.drill.common.types.Types; +import org.apache.drill.exec.memory.RootAllocator; +import org.apache.drill.exec.record.MaterializedField; +import org.junit.Assert; +import org.junit.Test; + +/** + * This test uses {@link VarCharVector} to test the template code in VariableLengthVector. + */ +public class VariableLengthVectorTest +{ + /** + * If the vector contains 1000 records, setting a value count of 1000 should work. 
+ */ + @Test + public void testSettingSameValueCount() + { +try (RootAllocator allocator = new RootAllocator(10_000_000)) { + final MaterializedField field = MaterializedField.create("stringCol", Types.required(TypeProtos.MinorType.VARCHAR)); + final VarCharVector vector = new VarCharVector(field, allocator); + + vector.allocateNew(); + + try { +final int size = 1000; +final VarCharVector.Mutator mutator = vector.getMutator(); +final VarCharVector.Accessor accessor = vector.getAccessor(); + +setSafeIndexStrings("", 0, size, mutator); + +mutator.setValueCount(size); +Assert.assertEquals(size, accessor.getValueCount()); +checkIndexStrings("", 0, size, accessor); + } finally { +vector.clear(); + } +} + } + + /** + * Test trunicating data. If you have 1 records, reduce the vector to 1000 records. + */ + @Test + public void testTrunicateVectorSetValueCount() + { +try (RootAllocator allocator = new RootAllocator(10_000_000)) { + final MaterializedField field = MaterializedField.create("stringCol", Types.required(TypeProtos.MinorType.VARCHAR)); + final VarCharVector vector = new VarCharVector(field, allocator); + + vector.allocateNew(); + + try { +final int size = 1000; +final int fluffSize = 1; +final VarCharVector.Mutator mutator = vector.getMutator(); +final VarCharVector.Accessor accessor = vector.getAccessor(); + +setSafeIndexStrings("", 0, size, mutator); +setSafeIndexStrings("first cut ", size, fluffSize, mutator); + +mutator.setValueCount(fluffSize); +Assert.assertEquals(fluffSize, accessor.getValueCount()); + +mutator.setValueCount(size); +Assert.assertEquals(size, accessor.getValueCount()); +setSafeIndexStrings("redone cut ", size, fluffSize, mutator); --- End diff -- Yikes! I didn't know this. Thanks for catching. ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174349986 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -514,6 +516,22 @@ public boolean isNull(int index){ * The equivalent Java primitive is '${minor.javaType!type.javaType}' * * NB: this class is automatically generated from ValueVectorTypes.tdd using FreeMarker. + * + * Contract --- End diff -- This is a good way to layout the information. I switched the javadoc to follow this outline. ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174349888 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -514,6 +516,22 @@ public boolean isNull(int index){ * The equivalent Java primitive is '${minor.javaType!type.javaType}' * * NB: this class is automatically generated from ValueVectorTypes.tdd using FreeMarker. + * + * Contract + * + * Variable length vectors do not support random writes. All set methods must be called for with a monotonically increasing consecutive sequence of indexes. + * It is possible to trim the vector by setting the value count to be less than the number of values currently container in the vector with {@link #setValueCount(int)}, then + * the process of setting values starts with the index after the last index. + * + * + * It is also possible to back track and set the value at an index earlier than the current index, however, the caller must assume that all data container after the last + * set index is corrupted. + * + * Notes + * + * There is no gaurantee the data buffer for the {@link VariableWidthVector} will have enough space to contain the data you set unless you use setSafe. If you + * use set you may get array index out of bounds exceptions. --- End diff -- Liked this refactored phrasing ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174349801 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -514,6 +516,22 @@ public boolean isNull(int index){ * The equivalent Java primitive is '${minor.javaType!type.javaType}' * * NB: this class is automatically generated from ValueVectorTypes.tdd using FreeMarker. + * + * Contract + * + * Variable length vectors do not support random writes. All set methods must be called for with a monotonically increasing consecutive sequence of indexes. + * It is possible to trim the vector by setting the value count to be less than the number of values currently container in the vector with {@link #setValueCount(int)}, then + * the process of setting values starts with the index after the last index. + * + * + * It is also possible to back track and set the value at an index earlier than the current index, however, the caller must assume that all data container after the last --- End diff -- Updated ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174349826 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -514,6 +516,22 @@ public boolean isNull(int index){ * The equivalent Java primitive is '${minor.javaType!type.javaType}' * * NB: this class is automatically generated from ValueVectorTypes.tdd using FreeMarker. + * + * Contract + * + * Variable length vectors do not support random writes. All set methods must be called for with a monotonically increasing consecutive sequence of indexes. + * It is possible to trim the vector by setting the value count to be less than the number of values currently container in the vector with {@link #setValueCount(int)}, then + * the process of setting values starts with the index after the last index. + * + * + * It is also possible to back track and set the value at an index earlier than the current index, however, the caller must assume that all data container after the last + * set index is corrupted. --- End diff -- Added ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174349707 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -506,6 +506,8 @@ public boolean isNull(int index){ } /** + * Overview --- End diff -- Fixed ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174348633 --- Diff: exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java --- @@ -0,0 +1,152 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.drill.exec.vector; + +import org.apache.drill.common.types.TypeProtos; +import org.apache.drill.common.types.Types; +import org.apache.drill.exec.memory.RootAllocator; +import org.apache.drill.exec.record.MaterializedField; +import org.junit.Assert; +import org.junit.Test; + +/** + * This test uses {@link VarCharVector} to test the template code in VariableLengthVector. + */ +public class VariableLengthVectorTest +{ + /** + * If the vector contains 1000 records, setting a value count of 1000 should work. 
+ */ + @Test + public void testSettingSameValueCount() + { +try (RootAllocator allocator = new RootAllocator(10_000_000)) { + final MaterializedField field = MaterializedField.create("stringCol", Types.required(TypeProtos.MinorType.VARCHAR)); + final VarCharVector vector = new VarCharVector(field, allocator); + + vector.allocateNew(); + + try { +final int size = 1000; +final VarCharVector.Mutator mutator = vector.getMutator(); +final VarCharVector.Accessor accessor = vector.getAccessor(); + +setSafeIndexStrings("", 0, size, mutator); + +mutator.setValueCount(size); +Assert.assertEquals(size, accessor.getValueCount()); +checkIndexStrings("", 0, size, accessor); + } finally { +vector.clear(); + } +} + } + + /** + * Test trunicating data. If you have 1 records, reduce the vector to 1000 records. + */ + @Test + public void testTrunicateVectorSetValueCount() + { +try (RootAllocator allocator = new RootAllocator(10_000_000)) { + final MaterializedField field = MaterializedField.create("stringCol", Types.required(TypeProtos.MinorType.VARCHAR)); + final VarCharVector vector = new VarCharVector(field, allocator); + + vector.allocateNew(); + + try { +final int size = 1000; +final int fluffSize = 1; +final VarCharVector.Mutator mutator = vector.getMutator(); +final VarCharVector.Accessor accessor = vector.getAccessor(); + +setSafeIndexStrings("", 0, size, mutator); +setSafeIndexStrings("first cut ", size, fluffSize, mutator); + +mutator.setValueCount(fluffSize); +Assert.assertEquals(fluffSize, accessor.getValueCount()); + +mutator.setValueCount(size); +Assert.assertEquals(size, accessor.getValueCount()); +setSafeIndexStrings("redone cut ", size, fluffSize, mutator); +mutator.setValueCount(fluffSize); +Assert.assertEquals(fluffSize, accessor.getValueCount()); + +mutator.setValueCount(size); +Assert.assertEquals(size, accessor.getValueCount()); + +checkIndexStrings("", 0, size, accessor); + + } finally { +vector.clear(); + } +} + } + + /** + * Set 1 values. 
Then go back and set new values starting at the 1001 the record. --- End diff -- I agree the vector writers should be used. The reason why I was looking into this is that I saw strange behavior in the legacy HashTable where setValueCount was being called with a larger valueCount than there was data in a VarCharVector. I did an ugly (and now I think incorrect) workaround for the issue and set about to make setValueCount support setting a valueCount larger than the number of elements in the VarCharVector. Now I am realizing the issue is actually with the HashTableTemplate. ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1164#discussion_r174343151 --- Diff: exec/vector/src/main/codegen/templates/VariableLengthVectors.java --- @@ -514,6 +516,22 @@ public boolean isNull(int index){ * The equivalent Java primitive is '${minor.javaType!type.javaType}' * * NB: this class is automatically generated from ValueVectorTypes.tdd using FreeMarker. + * + * Contract + * + * Variable length vectors do not support random writes. All set methods must be called for with a monotonically increasing consecutive sequence of indexes. --- End diff -- Thanks for bringing this up. I'm sharing a design doc on the dev list tomorrow or the day after about how I plan to refactor HashAgg. It will cover how to facilitate unit tests and how to change the memory handling to use a deterministic calculator like the SortMemoryManager and soon to be introduced HashJoinMemoryCalculator (instead of catch OOMs). Perhaps you could comment on the doc about how to set ourselves up to fix this case. ---
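The contract quoted in the diff above (variable-length vectors reject random writes; set methods must be called with consecutive, monotonically increasing indexes) can be illustrated with a minimal stand-in. This is not Drill's generated VarCharVector; it is a simplified sketch of the same offsets-plus-data layout, showing why a skipped index would leave garbage lengths behind.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

/**
 * Illustrative stand-in for a variable-width vector: a data buffer plus an
 * offsets array where offsets[i+1] - offsets[i] is the length of value i.
 */
public class OffsetVectorSketch {
  private byte[] data = new byte[64];
  private int[] offsets = new int[16];
  private int lastIndex = -1;

  public void setSafe(int index, String value) {
    // The contract: each write must target exactly lastIndex + 1. A random
    // write would leave a hole in the offsets array, so the skipped entries
    // would have undefined lengths on read.
    if (index != lastIndex + 1) {
      throw new IllegalStateException("non-consecutive write: " + index);
    }
    byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
    int start = offsets[index];
    while (start + bytes.length > data.length) {
      data = Arrays.copyOf(data, data.length * 2);
    }
    if (index + 2 > offsets.length) {
      offsets = Arrays.copyOf(offsets, offsets.length * 2);
    }
    System.arraycopy(bytes, 0, data, start, bytes.length);
    offsets[index + 1] = start + bytes.length;
    lastIndex = index;
  }

  public String get(int index) {
    return new String(data, offsets[index], offsets[index + 1] - offsets[index],
        StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    OffsetVectorSketch v = new OffsetVectorSketch();
    v.setSafe(0, "foo");
    v.setSafe(1, "bars");
    if (!v.get(0).equals("foo") || !v.get(1).equals("bars")) {
      throw new AssertionError("round trip failed");
    }
    boolean rejected = false;
    try {
      v.setSafe(5, "skip");   // random write violates the contract
    } catch (IllegalStateException e) {
      rejected = true;
    }
    if (!rejected) {
      throw new AssertionError("random write should have been rejected");
    }
    System.out.println("ok");
  }
}
```

The value writers mentioned elsewhere in this thread enforce exactly this kind of sequencing on behalf of the caller.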
[GitHub] drill issue #1165: DRILL-6239: Add build and license badges to README.md
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1165 @arina-ielchiieva ---
[GitHub] drill pull request #1165: DRILL-6239: Add build and license badges to README...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1165 DRILL-6239: Add build and license badges to README.md Add nice build and license badges that everyone else has. See a preview of what they look like here: https://github.com/ilooner/drill/tree/DRILL-6239 You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6239 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1165.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1165 commit c95030076a884b074821e169c4e58c84d89275c8 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-03-14T00:39:50Z DRILL-6239: Add build and license badges to README.md ---
[GitHub] drill issue #1164: DRILL-6234: Improved documentation for VariableWidthVecto...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1164 @paul-rogers ---
[GitHub] drill pull request #1164: DRILL-6234: Improved documentation for VariableWid...
GitHub user ilooner opened a pull request: https://github.com/apache/drill/pull/1164 DRILL-6234: Improved documentation for VariableWidthVector mutators I had some confusion about how setValueCount should behave for variable width vectors. I added some documentation and unit tests which explain its behavior so that others don't waste time in the future. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ilooner/drill DRILL-6234 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/1164.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1164 commit 4f13fd2873a9b510a9d0105ad87f72792aa46314 Author: Timothy Farkas <timothyfarkas@...> Date: 2018-03-14T00:24:28Z DRILL-6234: Improved documentation for VariableWidthVector mutators, and added simple unit tests demonstrating mutator behavior. ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is finish...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 Squashed commits. @arina-ielchiieva please let me know if you have any comments. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173543287 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -245,19 +306,17 @@ public Void run() throws Exception { // we have a heap out of memory error. The JVM in unstable, exit. CatastrophicFailure.exit(e, "Unable to handle out of memory condition in FragmentExecutor.", -2); } +} catch (InterruptedException e) { + // Swallow interrupted exceptions since we intentionally interrupt the root when cancelling a query + logger.trace("Interruped root: {}", e); } catch (Throwable t) { fail(t); } finally { - // no longer allow this thread to be interrupted. We synchronize here to make sure that cancel can't set an - // interruption after we have moved beyond this block. - synchronized (myThreadRef) { -myThreadRef.set(null); -Thread.interrupted(); - } - - // Make sure the event processor is started at least once - eventProcessor.start(); + // Don't process any more termination requests, we are done. + eventProcessor.terminate(); --- End diff -- There is a corner case. If we didn't include eventProcessor.terminate() we could theoretically receive a cancellation request for the first time after the interrupts were cleared for the FragmentExecutor#run thread. The cancellation would then interrupt the Thread again, and our FragmentExecutor would finish and leave the thread it used in an interrupted state. This could cause problems for the next FragmentExecutor that uses the same thread. ---
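The corner case described in this comment, a pooled thread returning to the pool with its interrupt flag still set, can be demonstrated in isolation. This is a hedged sketch with illustrative names, not Drill's FragmentExecutor: it shows why clearing the interrupt status before a task returns (and refusing late cancel events) keeps the next task on the same thread from starting out "interrupted".

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InterruptHygiene {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1); // one thread, reused

    Future<Boolean> first = pool.submit(() -> {
      try {
        Thread.currentThread().interrupt();   // simulate a late cancellation
        return Thread.currentThread().isInterrupted();
      } finally {
        Thread.interrupted();                 // clear the flag before the thread is reused
      }
    });
    if (!first.get()) {
      throw new AssertionError("flag should be set inside the first task");
    }

    // Because the first task cleared the flag, the reused thread is clean.
    Future<Boolean> second = pool.submit(() -> Thread.currentThread().isInterrupted());
    if (second.get()) {
      throw new AssertionError("interrupt status leaked into the next task");
    }
    pool.shutdown();
    System.out.println("ok");
  }
}
```

Without the `finally { Thread.interrupted(); }` line, the second task would observe a set interrupt flag, which is exactly the cross-fragment contamination the `eventProcessor.terminate()` call guards against.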
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173537487 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -488,47 +548,74 @@ void receiverFinished(FragmentHandle handle) { sendEvent(new FragmentEvent(EventType.RECEIVER_FINISHED, handle)); } +/** + * Tell the {@link FragmentEventProcessor} not to process anymore events. This keeps stray cancellation requests + * from being processed after the root has finished running and interrupts in the root thread have been cleared. + */ +public void terminate() { + terminate.set(true); +} + @Override protected void processEvent(FragmentEvent event) { + if (event.type.equals(EventType.RECEIVER_FINISHED)) { +// Finish request +if (terminate.get()) { + // We have already recieved a cancellation or we have terminated the event processor. Do not process anymore finish requests. + return; +} + } else { +// Cancel request +if (!terminate.compareAndSet(false, true)) { + // We have already received a cancellation or we have terminated the event processor. Do not process anymore cancellation requests. + // This prevents the root thread from being interrupted at an inappropriate time. + return; +} + } + switch (event.type) { case CANCEL: - /* - * We set the cancel requested flag but the actual cancellation is managed by the run() loop, if called. - */ + // We set the cancel requested flag but the actual cancellation is managed by the run() loop, if called. updateState(FragmentState.CANCELLATION_REQUESTED); - - /* - * Interrupt the thread so that it exits from any blocking operation it could be executing currently. We - * synchronize here to ensure we don't accidentally create a race condition where we interrupt the close out - * procedure of the main thread. 
- */ - synchronized (myThreadRef) { -final Thread myThread = myThreadRef.get(); -if (myThread != null) { - logger.debug("Interrupting fragment thread {}", myThread.getName()); - myThread.interrupt(); -} - } + // The root was started so we have to interrupt it in case it is performing a blocking operation. + killThread(); + terminate.set(true); break; - case CANCEL_AND_FINISH: + // In this case the root was never started so we do not have to interrupt the thread. updateState(FragmentState.CANCELLATION_REQUESTED); + // The FragmentExecutor#run() loop will not execute in this case so we have to cleanup resources here cleanup(FragmentState.FINISHED); + terminate.set(true); --- End diff -- Thanks for catching ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173537455 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -488,47 +548,74 @@ void receiverFinished(FragmentHandle handle) { sendEvent(new FragmentEvent(EventType.RECEIVER_FINISHED, handle)); } +/** + * Tell the {@link FragmentEventProcessor} not to process anymore events. This keeps stray cancellation requests + * from being processed after the root has finished running and interrupts in the root thread have been cleared. + */ +public void terminate() { + terminate.set(true); +} + @Override protected void processEvent(FragmentEvent event) { + if (event.type.equals(EventType.RECEIVER_FINISHED)) { +// Finish request +if (terminate.get()) { + // We have already recieved a cancellation or we have terminated the event processor. Do not process anymore finish requests. + return; +} + } else { +// Cancel request +if (!terminate.compareAndSet(false, true)) { + // We have already received a cancellation or we have terminated the event processor. Do not process anymore cancellation requests. + // This prevents the root thread from being interrupted at an inappropriate time. + return; +} + } + switch (event.type) { case CANCEL: - /* - * We set the cancel requested flag but the actual cancellation is managed by the run() loop, if called. - */ + // We set the cancel requested flag but the actual cancellation is managed by the run() loop, if called. updateState(FragmentState.CANCELLATION_REQUESTED); - - /* - * Interrupt the thread so that it exits from any blocking operation it could be executing currently. We - * synchronize here to ensure we don't accidentally create a race condition where we interrupt the close out - * procedure of the main thread. 
- */ - synchronized (myThreadRef) { -final Thread myThread = myThreadRef.get(); -if (myThread != null) { - logger.debug("Interrupting fragment thread {}", myThread.getName()); - myThread.interrupt(); -} - } + // The root was started so we have to interrupt it in case it is performing a blocking operation. + killThread(); + terminate.set(true); --- End diff -- Doh! Thanks for catching ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is finish...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @vrozov @arina-ielchiieva Applied review comments, please let me know if there are anymore comments. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173367485 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -231,9 +261,9 @@ public Void run() throws Exception { while (shouldContinue()) { // Fragment is not cancelled -if (eventProcessor.hasFinishedRequests()) { +if (!recieverFinishedQueue.isEmpty()) { --- End diff -- Thanks for catching. Removed. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173367462 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -488,47 +527,66 @@ void receiverFinished(FragmentHandle handle) { sendEvent(new FragmentEvent(EventType.RECEIVER_FINISHED, handle)); } +/** + * Tell the {@link FragmentEventProcessor} not to process anymore events. This keeps stray cancellation requests + * from being processed after the root has finished running and interrupts in the root thread have been cleared. + */ +public synchronized void terminate() { + terminate.set(true); +} + @Override protected void processEvent(FragmentEvent event) { + if (terminate.get()) { --- End diff -- Thanks for catching. Fixed. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173367348 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -67,6 +88,11 @@ private volatile RootExec root; private final AtomicReference fragmentState = new AtomicReference<>(FragmentState.AWAITING_ALLOCATION); + /** + * Holds all of the messages sent by downstream recievers that have finished. The {@link FragmentExecutor#run()} thread reads from this queue and passes the + * finished messages to the fragment's {@link RootExec} via the {@link RootExec#receivingFragmentFinished(FragmentHandle)} method. + */ + private final Queue recieverFinishedQueue = new ConcurrentLinkedQueue<>(); --- End diff -- Thanks for catching. Fixed all the misspellings of receiver. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173367305 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -475,6 +483,8 @@ public void drillbitUnregistered(final Set * This is especially important as fragments can take longer to start */ private class FragmentEventProcessor extends EventProcessor { +private boolean terminate = false; +private List finishedHandles = Lists.newArrayList(); --- End diff -- Done ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r173367313 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -475,6 +483,8 @@ public void drillbitUnregistered(final Set * This is especially important as fragments can take longer to start */ private class FragmentEventProcessor extends EventProcessor { +private boolean terminate = false; --- End diff -- Done ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is finish...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @vrozov @arina-ielchiieva Handled multiple receiver finished messages correctly. This PR is ready for another round of review. ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is finish...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 Nope never mind I think I spoke too soon. I just realized we may get multiple receivingFragmentFinished requests, one for each downstream receiver. Back to the drawing board. ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is finish...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @arina-ielchiieva @vrozov In light of Vlad's comments I have reworked the synchronization model yet again. This change now removes all synchronization from PartitionSenderRootExec and enforces the guarantee that all the lifecycle methods of the PartitionSenderRootExec will only be called by a single run() thread in the FragmentExecutor. Also while making this change I discovered a few other bugs with how cancellations and receiver finishes are handled, so I have addressed those bugs as well. I will go into more detail about what I changed below. # Motivation As Vlad pointed out **close** and **innerNext** are never called concurrently. After closer inspection of the code I also realized that currently (in apache master) innerNext and close will always be called by the **FragmentExecutor#run** thread. The only method of PartitionSenderRootExec that is not called by the **FragmentExecutor#run** thread is **receivingFragmentFinished**. In order to simplify the implementation of PartitionSenderRootExec and also simplify the design of the FragmentExecutor I changed the code so that only the **FragmentExecutor#run** thread calls **receivingFragmentFinished** as well. In this way we can remove all the synchronization from PartitionSenderRootExec. This was done by: 1. Making the event processor save the FragmentHandle in the event that a receiver finish request was sent. 2. After the **root.next()** loop terminates in **FragmentExecutor#run** the eventProcessor is checked to see if a finish request was received. If so **receivingFragmentFinished** is called on root by the **FragmentExecutor#run** method. # Other Bugs ## Processing of multiple termination requests The event processor would process all cancellation and finish requests, even if there is more than one. This doesn't really make sense, since we can only cancel or finish once. 
So I have changed the event processor to execute only the first termination request and ignore all the others. ## Simultaneous Cancellation and Finishing Since the event processor would process multiple termination requests concurrently it was possible for a cancel and a finish message to be received and processed simultaneously. The results of this were not well defined since **close** and **receivingFragmentFinished** could be called concurrently. # Other Improvements Vlad also pointed out that we did not need the hasCloseoutThread atomic reference, since we were already using the myThreadRef atomic reference. That cleaned up the code a bit. ---
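The two mechanisms described in this comment, letting only the first termination request win and queueing receiver-finished handles so a single run() thread touches the root, can be sketched as follows. Names are illustrative, not Drill's actual classes; the shape mirrors the compareAndSet-plus-queue design discussed above.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class TerminationSketch {
  enum Event { CANCEL, RECEIVER_FINISHED }

  private final AtomicBoolean terminated = new AtomicBoolean(false);
  // Drained only by the run() thread, so receivingFragmentFinished is never
  // called concurrently with innerNext or close.
  final Queue<String> finishedReceivers = new ConcurrentLinkedQueue<>();
  int cancelsProcessed = 0;

  /** Called from event-processor threads. */
  void processEvent(Event e, String handle) {
    if (e == Event.RECEIVER_FINISHED) {
      if (!terminated.get()) {
        finishedReceivers.add(handle);
      }
      return;
    }
    // CANCEL: only the first termination request is acted on; later ones are
    // ignored so the run() thread cannot be interrupted after cleanup.
    if (terminated.compareAndSet(false, true)) {
      cancelsProcessed++;
    }
  }

  public static void main(String[] args) {
    TerminationSketch s = new TerminationSketch();
    s.processEvent(Event.RECEIVER_FINISHED, "frag-1");
    s.processEvent(Event.CANCEL, null);
    s.processEvent(Event.CANCEL, null);                 // ignored: already terminated
    s.processEvent(Event.RECEIVER_FINISHED, "frag-2");  // ignored after termination
    if (s.cancelsProcessed != 1) {
      throw new AssertionError("expected exactly one cancel to win");
    }
    if (s.finishedReceivers.size() != 1) {
      throw new AssertionError("late finish should have been dropped");
    }
    System.out.println("ok");
  }
}
```

This also rules out the "simultaneous cancellation and finishing" bug above, since a finish arriving after a cancel (or vice versa) is simply dropped.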
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is cancel...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @vrozov Thanks for catching this, I believe you are right. hasCloseoutThread guarantees innerNext and close won't be called concurrently. However, I still believe innerNext and receivingFragmentFinished could be called concurrently, since the ControlMessageHandler thread executes receivingFragmentFinished. Additionally in rare cases where a limit query is cancelled, receivingFragmentFinished and close could be called concurrently as well. While reflecting on your comments I also saw another issue where the root could be blocked on a next call but a Finished event would not cause root to terminate. In light of all of this I actually think **synchronized** alone is not sufficient. We will have to have some way to interrupt the execution of the root when we receive a finish signal and only close out the resources after receivingFragmentFinished has been called. Similarly if we receive a finish signal we should ignore any cancellation requests instead of trying to cancel and finish simultaneously, and vice versa. I will rework the solution to address these issues and let you know when I have an update. ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is cancel...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @arina-ielchiieva Applied review comments. ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r172327066 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/work/fragment/FragmentExecutor.java --- @@ -245,6 +244,8 @@ public Void run() throws Exception { // we have a heap out of memory error. The JVM in unstable, exit. CatastrophicFailure.exit(e, "Unable to handle out of memory condition in FragmentExecutor.", -2); } +} catch (InterruptedException e) { + // Swallow interrupted exceptions since we intentionally interrupt the root when cancelling a query --- End diff -- Done ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r172327159 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionSenderRootExec.java --- @@ -358,7 +357,7 @@ public void close() throws Exception { } } - public void sendEmptyBatch(boolean isLast) { + public synchronized void sendEmptyBatch(boolean isLast) { --- End diff -- Done ---
[GitHub] drill pull request #1105: DRILL-6125: Fix possible memory leak when query is...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1105#discussion_r172327106 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/partitionsender/PartitionSenderRootExec.java --- @@ -231,7 +231,7 @@ public boolean innerNext() { } @VisibleForTesting - protected void createPartitioner() throws SchemaChangeException { + protected synchronized void createPartitioner() throws SchemaChangeException { --- End diff -- Removed ---
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is cancel...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @vrozov The main issue is that innerNext and close should not execute concurrently. Even when we are using AtomicReferences and volatile the following sequence of events could happen, which could cause a memory leak: 1. Let's say there are two threads. The **Close thread** which starts at the beginning of the close method, and the **Next thread** which starts at the beginning of the innerNext method. 1. Now let's say the **Next Thread** runs and checks **ok**. Since close has not been called yet **ok** is true. 1. Now the **Next Thread** is after the ok check. 1. The **Close thread** now starts executing and clears the partitioner. 1. Now after the partitioner is cleared the **Next Thread** can resume executing. If the next thread receives an OK_SCHEMA it will allocate a new partitioner. Since the OK_SCHEMA message may include records the partitioner may partition some data as well. 1. Now the **Close thread** is done, but there is a partitioner that has not been closed, and we will leak memory. In order to properly solve this problem we need to make sure that the innerNext and close methods are mutually exclusive so the above scenario can never happen. The easiest way to do that is to use the synchronized keyword. If we use the synchronized keyword then we don't have to use volatile or atomic references. Also as a side note using synchronized will probably be more efficient since a cache flush would only be triggered at the start of the innerNext and close method. Alternatively if we used volatile and AtomicReference a cache flush would be triggered every time we accessed the ok and partitioner variables. ---
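The mutual-exclusion fix proposed in this comment can be sketched in a few lines. This is a minimal stand-in, not Drill's PartitionSenderRootExec: once both methods hold the same monitor, the six-step interleaving above cannot happen, because an innerNext that runs after close observes `ok == false` and never re-allocates the partitioner.

```java
public class SenderSketch {
  private boolean ok = true;
  private Object partitioner;           // stands in for the real Partitioner

  public synchronized boolean innerNext() {
    if (!ok) {
      return false;                     // close() already ran; never re-allocate
    }
    if (partitioner == null) {
      partitioner = new Object();       // e.g. on an OK_NEW_SCHEMA batch
    }
    return true;
  }

  public synchronized void close() {
    ok = false;
    partitioner = null;                 // released under the same lock
  }

  public synchronized boolean leaked() {
    return partitioner != null;
  }

  public static void main(String[] args) throws Exception {
    final SenderSketch sender = new SenderSketch();
    Thread nextLoop = new Thread(() -> {
      while (sender.innerNext()) {      // spins until close() flips ok
        Thread.yield();
      }
    });
    nextLoop.start();
    Thread.sleep(10);                   // let some innerNext() calls happen
    sender.close();
    nextLoop.join();
    if (sender.leaked()) {
      throw new AssertionError("partitioner leaked past close()");
    }
    System.out.println("ok");
  }
}
```

However close() and innerNext() interleave, the final state after close is always "no partitioner allocated", which is exactly the leak-freedom property argued for above.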
[GitHub] drill issue #1143: DRILL-1491: Support for JDK 8
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1143 @vladimirtkach This test is failing on Travis: ``` TestOutputBatchSize.testFlattenUpperLimit:695->getExpectedSize:58->PhysicalOpUnitTestBase.getReaderListForJsonBatches:483->PhysicalOpUnitTestBase.getRecordReadersForJsonBatches:479->PhysicalOpUnitTestBase.getJsonReadersFromBatchString:505 » TestTimedOut ``` ---
[GitHub] drill pull request #1143: DRILL-1491: Support for JDK 8
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1143#discussion_r171997640 --- Diff: .travis.yml --- @@ -17,9 +17,9 @@ before_install: git fetch --unshallow sudo: required language: java jdk: - - openjdk7 + - openjdk8 cache: directories: - "$HOME/.m2" install: MAVEN_OPTS="-Xms1G -Xmx1G" mvn install --batch-mode -DskipTests=true -Dmaven.javadoc.skip=true -Dmaven.source.skip=true -script: mvn install -DexcludedGroups="org.apache.drill.categories.SlowTest,org.apache.drill.categories.UnlikelyTest,org.apache.drill.categories.SecurityTest" -DforkCount=1 -DmemoryMb=2560 -DdirectMemoryMb=4608 +script: mvn install -DexcludedGroups="org.apache.drill.categories.SlowTest,org.apache.drill.categories.UnlikelyTest,org.apache.drill.categories.SecurityTest" -DforkCount=1 -DmemoryMb=2560 -DdirectMemoryMb=6008 --- End diff -- @vladimirtkach Why the change in direct memory for travis? Did you run into an issue with the previous setting? ---
[GitHub] drill pull request #1148: DRILL-5994: enable configuring number of Jetty acc...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1148#discussion_r171988996 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/WebServer.java --- @@ -154,35 +152,31 @@ public void start() throws Exception { final boolean authEnabled = config.getBoolean(ExecConstants.USER_AUTHENTICATION_ENABLED); -port = config.getInt(ExecConstants.HTTP_PORT); -boolean portHunt = config.getBoolean(ExecConstants.HTTP_PORT_HUNT); -int retry = 0; - -for (; retry < PORT_HUNT_TRIES; retry++) { - embeddedJetty = new Server(new QueuedThreadPool(config.getInt(ExecConstants.WEB_SERVER_THREAD_POOL_MAX))); - embeddedJetty.setHandler(createServletContextHandler(authEnabled)); - embeddedJetty.addConnector(createConnector(port)); - +int port = config.getInt(ExecConstants.HTTP_PORT); +final boolean portHunt = config.getBoolean(ExecConstants.HTTP_PORT_HUNT); +final int acceptors = config.getInt(ExecConstants.HTTP_JETTY_SERVER_ACCEPTORS); +final int selectors = config.getInt(ExecConstants.HTTP_JETTY_SERVER_SELECTORS); +final QueuedThreadPool threadPool = new QueuedThreadPool(2, 2, 6); +embeddedJetty = new Server(threadPool); +embeddedJetty.setHandler(createServletContextHandler(authEnabled)); +ServerConnector connector = createConnector(port, acceptors, selectors); +threadPool.setMaxThreads(1 + connector.getAcceptors() + connector.getSelectorManager().getSelectorCount()); +embeddedJetty.addConnector(connector); +for (int retry = 0; retry < PORT_HUNT_TRIES; retry++) { + connector.setPort(port); try { embeddedJetty.start(); +return; } catch (BindException e) { if (portHunt) { - int nextPort = port + 1; - logger.info("Failed to start on port {}, trying port {}", port, nextPort); - port = nextPort; - embeddedJetty.stop(); + logger.info("Failed to start on port {}, trying port {}", port, ++port, e); --- End diff -- I see thx ---
[GitHub] drill pull request #1148: DRILL-5994: enable configuring number of Jetty acc...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1148#discussion_r171988104 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/WebServer.java --- @@ -411,6 +404,7 @@ private ServerConnector createHttpsConnector(int port) throws Exception { // SSL Connector final ServerConnector sslConnector = new ServerConnector(embeddedJetty, +null, null, null, -1, -1, --- End diff -- Why not use the configured values for HTTPS? ---
[GitHub] drill pull request #1148: DRILL-5994: enable configuring number of Jetty acc...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1148#discussion_r171987969 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/WebServer.java --- @@ -424,10 +418,11 @@ private ServerConnector createHttpsConnector(int port) throws Exception { * @return Initialized {@link ServerConnector} instance for HTTP connections. * @throws Exception */ - private ServerConnector createHttpConnector(int port) throws Exception { + private ServerConnector createHttpConnector(int port, int acceptors, int selectors) throws Exception { logger.info("Setting up HTTP connector for web server"); final HttpConfiguration httpConfig = new HttpConfiguration(); -final ServerConnector httpConnector = new ServerConnector(embeddedJetty, new HttpConnectionFactory(httpConfig)); +final ServerConnector httpConnector = +new ServerConnector(embeddedJetty, null, null, null, acceptors, selectors, new HttpConnectionFactory(httpConfig)); --- End diff -- Why not ``` new ServerConnector(embeddedJetty, acceptors, selectors, new HttpConnectionFactory(httpConfig)); ``` ---
[GitHub] drill pull request #1148: DRILL-5994: enable configuring number of Jetty acc...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1148#discussion_r171986550 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/WebServer.java --- @@ -154,35 +152,31 @@ public void start() throws Exception { final boolean authEnabled = config.getBoolean(ExecConstants.USER_AUTHENTICATION_ENABLED); -port = config.getInt(ExecConstants.HTTP_PORT); -boolean portHunt = config.getBoolean(ExecConstants.HTTP_PORT_HUNT); -int retry = 0; - -for (; retry < PORT_HUNT_TRIES; retry++) { - embeddedJetty = new Server(new QueuedThreadPool(config.getInt(ExecConstants.WEB_SERVER_THREAD_POOL_MAX))); - embeddedJetty.setHandler(createServletContextHandler(authEnabled)); - embeddedJetty.addConnector(createConnector(port)); - +int port = config.getInt(ExecConstants.HTTP_PORT); +final boolean portHunt = config.getBoolean(ExecConstants.HTTP_PORT_HUNT); +final int acceptors = config.getInt(ExecConstants.HTTP_JETTY_SERVER_ACCEPTORS); +final int selectors = config.getInt(ExecConstants.HTTP_JETTY_SERVER_SELECTORS); +final QueuedThreadPool threadPool = new QueuedThreadPool(2, 2, 6); +embeddedJetty = new Server(threadPool); +embeddedJetty.setHandler(createServletContextHandler(authEnabled)); +ServerConnector connector = createConnector(port, acceptors, selectors); +threadPool.setMaxThreads(1 + connector.getAcceptors() + connector.getSelectorManager().getSelectorCount()); +embeddedJetty.addConnector(connector); +for (int retry = 0; retry < PORT_HUNT_TRIES; retry++) { + connector.setPort(port); try { embeddedJetty.start(); +return; } catch (BindException e) { if (portHunt) { - int nextPort = port + 1; - logger.info("Failed to start on port {}, trying port {}", port, nextPort); - port = nextPort; - embeddedJetty.stop(); + logger.info("Failed to start on port {}, trying port {}", port, ++port, e); --- End diff -- Originally I stopped the jetty server in the event that a BindException was thrown, since it looks like jetty starts its
beans before it starts its connectors (see org.eclipse.jetty.server.Server#doStart). Since the thread pool is treated like a bean by Jetty this means the ThreadPool is started before the connectors are started. So if a connector fails with a BindException then we will have a leaked QueuedThreadPool. My understanding is if we want to release resources created during a failed start we would have to stop the jetty server after a failure. ---
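The port-hunting retry loop under discussion can be sketched, independently of Jetty, with plain `java.net` sockets. `PortHunter`, `bindWithHunt`, and the `PORT_HUNT_TRIES` value here are illustrative names, not Drill's actual code; unlike Jetty's `Server.start()`, a bare `ServerSocket` starts no beans or thread pool, so a failed bind leaves nothing behind to stop:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortHunter {
    static final int PORT_HUNT_TRIES = 300;

    // Try to bind starting at startPort; on BindException, advance to the
    // next port (if hunting is enabled) instead of giving up.
    static ServerSocket bindWithHunt(int startPort, boolean portHunt) throws IOException {
        int port = startPort;
        for (int retry = 0; retry < PORT_HUNT_TRIES; retry++) {
            try {
                return new ServerSocket(port);
            } catch (BindException e) {
                if (!portHunt) {
                    throw e;
                }
                port++;  // the port was taken; hunt onward to the next one
            }
        }
        throw new IOException("no free port found after " + PORT_HUNT_TRIES + " tries");
    }

    public static void main(String[] args) throws IOException {
        // Occupy an ephemeral port, then hunt starting from that same port.
        try (ServerSocket taken = new ServerSocket(0)) {
            try (ServerSocket hunted = bindWithHunt(taken.getLocalPort(), true)) {
                System.out.println(hunted.getLocalPort() != taken.getLocalPort());
            }
        }
    }
}
```

The contrast with the Jetty case is the leak being discussed: `embeddedJetty.start()` also starts managed components such as the thread pool, so a bind failure leaves those running unless the server itself is stopped.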
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is cancel...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105

@arina-ielchiieva @vrozov I believe I have a solution. There were several issues with the original code.

1. It made incorrect assumptions about how cache invalidation works with java **synchronized**.
2. It assumed **innerNext** and **close** would be called sequentially.

I believe this PR fixes these issues now, and I have gone into more detail about the problems below.

# 1. Incorrect Cache Invalidation Assumptions

The original code was trying to be smart by reducing synchronization overhead on **innerNext**. Since **innerNext** is called often, the code there did not synchronize before changing the partitioner object. The code in **close()** and **receivingFragmentFinished()** did synchronize before accessing the partitioner, with the intention that this would trigger an update of the partitioner variable's state across all threads. Unfortunately, this assumption is invalid (see https://stackoverflow.com/questions/22706739/does-synchronized-guarantee-a-thread-will-see-the-latest-value-of-a-non-volatile). Every thread that accesses a shared variable must synchronize around its accesses in order to properly invalidate cached data on a core. For example, if **Thread A** modifies **Variable 1** without synchronizing, and **Thread B** synchronizes before reading **Variable 1**, there is no guarantee **Thread B** will see the most up-to-date value of **Variable 1**, since **Thread A**'s unsynchronized write may not yet be visible to other threads.

## Solution

In summary, the right thing to do is the simple thing: make the methods synchronized. Unfortunately, there is no way to outsmart the system and reduce synchronization overhead without causing race conditions.

# 2. Concurrent InnerNext and Close Calls

The original code did not consider the case where **innerNext** was in the middle of execution when **close** was called. It did try to handle the case where **innerNext** could be called after **close** by setting the **ok** variable, but it didn't even do that right, because there was no synchronization around the **ok** variable.

## Solution

The right thing to do is the simple thing: make sure the methods are synchronized, so **close** has to wait until **innerNext** is done before executing. Also, when a query is cancelled, the thread running **innerNext** should be interrupted in case it is blocked on a call.
---
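The "make both methods synchronized" fix can be sketched as follows (hypothetical class and field names, not Drill's actual sender code): `close()` blocks until any in-flight `innerNext()` returns, and a late `innerNext()` observes the closed flag instead of touching released resources:

```java
// Minimal sketch of the locking discipline described above. "SenderSketch"
// and "partitioner" are stand-in names for illustration only.
public class SenderSketch {
    private Object partitioner = new Object();  // stand-in for the real partitioner state
    private boolean closed = false;

    public synchronized boolean innerNext() {
        if (closed) {
            return false;  // close() already released the partitioner; do nothing
        }
        // ... route outgoing batches through this.partitioner ...
        return true;
    }

    public synchronized void close() {
        // Cannot begin while innerNext() holds the monitor, so resources are
        // never released mid-iteration.
        if (!closed) {
            partitioner = null;  // release exactly once
            closed = true;
        }
    }
}
```

Because both methods synchronize on the same monitor, the writes made by one thread inside `close()` are guaranteed visible to another thread's later `innerNext()`, which is exactly the visibility guarantee the unsynchronized original lacked.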
[GitHub] drill pull request #1101: DRILL-6032: Made the batch sizing for HashAgg more...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1101#discussion_r171133616

--- Diff: exec/java-exec/src/main/resources/drill-module.conf ---
@@ -427,8 +427,8 @@ drill.exec.options: {
     exec.enable_union_type: false,
     exec.errors.verbose: false,
     exec.hashagg.mem_limit: 0,
-    exec.hashagg.min_batches_per_partition: 2,
-    exec.hashagg.num_partitions: 32,
+    exec.hashagg.min_batches_per_partition: 1,
--- End diff --

@Ben-Zvi This setting controls the minimum number of batches kept in memory per partition. Making it larger causes us to consume more memory; making it smaller causes us to consume less. Also, the overall purpose of this PR is to make the memory calculations more precise and deterministic, and it passes all regression tests.
---
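The trade-off behind this setting can be shown with back-of-the-envelope arithmetic. This is only an illustration, not Drill's actual HashAgg memory calculator, and the 16 MiB batch size is an assumed figure:

```java
public class HashAggMemoryFloor {
    // Rough lower bound on memory reserved for in-memory batches: each of the
    // numPartitions partitions keeps at least minBatchesPerPartition batches.
    static long memoryFloorBytes(int numPartitions, int minBatchesPerPartition, long batchSizeBytes) {
        return (long) numPartitions * minBatchesPerPartition * batchSizeBytes;
    }

    public static void main(String[] args) {
        long batch = 16L * 1024 * 1024;  // assumed 16 MiB batch size
        // Old defaults (32 partitions, 2 batches each) vs. new (32 partitions, 1 batch each):
        System.out.println(memoryFloorBytes(32, 2, batch));  // 1073741824 (1 GiB)
        System.out.println(memoryFloorBytes(32, 1, batch));  // 536870912 (512 MiB)
    }
}
```

Under these assumed numbers, dropping min_batches_per_partition from 2 to 1 halves the memory floor, matching the "larger consumes more, smaller consumes less" description above.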
[GitHub] drill issue #1105: DRILL-6125: Fix possible memory leak when query is cancel...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1105 @priteshm @arina-ielchiieva I should have updated this PR earlier this week; here is my update. After reflecting on Arina's comments and reading some more docs about how Java implements volatile and synchronization, I think this solution might not fix the original race condition. I need to do more reading to get a better understanding. Additionally, I realized there is another race condition where two threads simultaneously call close and innerNext, which could cause a memory leak. I haven't had a chance to dig further this week, so I will try to wrap this up next week. ---
[GitHub] drill issue #1011: Drill 1170: Drill-on-YARN
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1011

@paul-rogers You need to add this dependency to your drill-yarn pom.xml:

```
<dependency>
  <groupId>org.apache.drill</groupId>
  <artifactId>drill-common</artifactId>
  <version>${project.version}</version>
  <classifier>tests</classifier>
  <scope>test</scope>
</dependency>
```
---
[GitHub] drill issue #1101: DRILL-6032: Made the batch sizing for HashAgg more accura...
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1101 @Ben-Zvi I applied your review comment; please let me know if you have any more comments. ---
[GitHub] drill pull request #1101: DRILL-6032: Made the batch sizing for HashAgg more...
Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/1101#discussion_r168609439

--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestHashAggrSpill.java ---
@@ -93,7 +93,7 @@ private void testSpill(long maxMem, long numPartitions, long minBatches, int max
   @Test
   public void testSimpleHashAggrSpill() throws Exception {
     testSpill(68_000_000, 16, 2, 2, false,
-        true, null, 1_200_000, 1, 1, 1, 3);
+        true, null, 1_200_000, 1, 2, 1, 3);
--- End diff --

Fixed. I increased the memory slightly.
---
[GitHub] drill issue #1121: DRILL-6153: Operator framework
Github user ilooner commented on the issue: https://github.com/apache/drill/pull/1121

Hi @paul-rogers, nice to see this code being encapsulated, standardized, and unit tested. There seems to be a Travis test failure with your changes, though.

```
Failed tests:
  TestOperatorRecordBatch.testNormalLifeCycle:156 expected null, but was:<org.apache.drill.test.OperatorFixture$MockFragmentContext@45f1feca>
```
---