[
https://issues.apache.org/jira/browse/DRILL-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956321#comment-15956321
]
Khurram Faraaz commented on DRILL-4847:
---------------------------------------
[~zelaine] Here is the stack trace from drillbit.log. I didn't share it earlier
because xsort.managed.ExternalSortBatch does not appear in the stack trace or
anywhere else in drillbit.log.
We do see these two frames in the stack trace; are they from the managed external
sort?
xsort.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:584)
xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:428)
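As a side note, one way to check which sort implementation a query is running through is to toggle the sort option and re-run. A minimal sketch, assuming the {{exec.sort.disable_managed}} session option is available in this 1.11.0-SNAPSHOT build:
{noformat}
-- Check the current setting (assumes exec.sort.disable_managed exists in this build)
SELECT * FROM sys.options WHERE name = 'exec.sort.disable_managed';

-- Force the legacy (non-managed) sort for this session, then re-run the query
ALTER SESSION SET `exec.sort.disable_managed` = true;

-- Switch back to the managed sort
ALTER SESSION SET `exec.sort.disable_managed` = false;
{noformat}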
{noformat}
0: jdbc:drill:schema=dfs.tmp> SELECT clientname, audiencekey, spendprofileid,
postalcd, provincecd, provincename, postalcode_json, country_json,
province_json, town_json, dma_json, msa_json, ROW_NUMBER() OVER (PARTITION BY
spendprofileid ORDER BY (CASE WHEN postalcd IS NULL THEN 9 ELSE 0 END) ASC,
provincecd ASC) as rn FROM `MD593.parquet` limit 3;
Error: RESOURCE ERROR: One or more nodes ran out of memory while executing the
query.
Failure while allocating buffer.
Fragment 0:0
[Error Id: 757f5f08-b02c-4176-870c-d4ed61f1a769 on centos-01.qa.lab:31010]
(state=,code=0)
{noformat}
{noformat}
2017-04-05 05:17:27,473 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.drill.exec.work.foreman.Foreman - Query text for query id
271b8218-1dec-392b-e82f-e856b7db232e: SELECT clientname, audiencekey,
spendprofileid, postalcd, provincecd, provincename, postalcode_json,
country_json, province_json, town_json, dma_json, msa_json, ROW_NUMBER() OVER
(PARTITION BY spendprofileid ORDER BY (CASE WHEN postalcd IS NULL THEN 9 ELSE
0 END) ASC, provincecd ASC) as rn FROM `MD593.parquet` limit 3
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,558 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms,
numFiles: 1
2017-04-05 05:17:27,566 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.parquet.Metadata - Took 0 ms to get file statuses
2017-04-05 05:17:27,569 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 1
using 1 threads. Time: 2ms total, 2.823541ms avg, 2ms max.
2017-04-05 05:17:27,569 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.parquet.Metadata - Fetch parquet metadata: Executed 1 out of 1
using 1 threads. Earliest start: 0.535000 μs, Latest start: 0.535000 μs,
Average start: 0.535000 μs .
2017-04-05 05:17:27,569 [271b8218-1dec-392b-e82f-e856b7db232e:foreman] INFO
o.a.d.exec.store.parquet.Metadata - Took 2 ms to read file metadata
2017-04-05 05:17:27,681 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.w.fragment.FragmentExecutor - 271b8218-1dec-392b-e82f-e856b7db232e:0:0:
State change requested AWAITING_ALLOCATION --> RUNNING
2017-04-05 05:17:27,681 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.w.f.FragmentStatusReporter - 271b8218-1dec-392b-e82f-e856b7db232e:0:0:
State to report: RUNNING
2017-04-05 05:17:27,776 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/0
2017-04-05 05:17:27,800 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/0
2017-04-05 05:17:27,818 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/1
2017-04-05 05:17:27,840 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/1
2017-04-05 05:17:27,858 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Merging spills
2017-04-05 05:17:27,864 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Merging and spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/2
2017-04-05 05:17:27,902 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to
/tmp/drill/spill/271b8218-1dec-392b-e82f-e856b7db232e_majorfragment0_minorfragment0_operator8/2
2017-04-05 05:17:27,906 [271b8218-1dec-392b-e82f-e856b7db232e:frag:0:0] INFO
o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes
ran out of memory while executing the query. (Failure while allocating buffer.)
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more
nodes ran out of memory while executing the query.
Failure while allocating buffer.
[Error Id: 757f5f08-b02c-4176-870c-d4ed61f1a769 ]
at
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:242)
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_91]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure while
allocating buffer.
at
org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:199)
~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:331)
~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:307)
~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.vector.complex.RepeatedMapVector.getTransferPair(RepeatedMapVector.java:161)
~[vector-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.SimpleVectorWrapper.cloneAndTransfer(SimpleVectorWrapper.java:66)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.VectorContainer.cloneAndTransfer(VectorContainer.java:205)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.VectorContainer.getTransferClone(VectorContainer.java:157)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:584)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:428)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.window.WindowFrameRecordBatch.innerNext(WindowFrameRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method)
~[na:1.8.0_91]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_91]
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
~[hadoop-common-2.7.0-mapr-1607.jar:na]
at
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
... 4 common frames omitted
{noformat}
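Purely as a workaround sketch, not a fix for the underlying issue: the sort's memory budget can be raised before re-running, assuming the standard {{planner.memory.max_query_memory_per_node}} option (value in bytes) applies here.
{noformat}
-- Raise the per-node query memory budget to 4 GB for this session (value in bytes),
-- then re-run the window-function query to see whether the merge-and-spill still OOMs.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;
{noformat}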
> Window function query results in OOM Exception.
> -----------------------------------------------
>
> Key: DRILL-4847
> URL: https://issues.apache.org/jira/browse/DRILL-4847
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Flow
> Affects Versions: 1.8.0
> Environment: 4 node cluster CentOS
> Reporter: Khurram Faraaz
> Assignee: Paul Rogers
> Priority: Critical
> Labels: window_function
> Attachments: drillbit.log
>
>
> Window function query results in OOM Exception.
> Drill version 1.8.0-SNAPSHOT git commit ID: 38ce31ca
> MapRBuildVersion 5.1.0.37549.GA
> {noformat}
> 0: jdbc:drill:schema=dfs.tmp> SELECT clientname, audiencekey, spendprofileid,
> postalcd, provincecd, provincename, postalcode_json, country_json,
> province_json, town_json, dma_json, msa_json, ROW_NUMBER() OVER (PARTITION BY
> spendprofileid ORDER BY (CASE WHEN postalcd IS NULL THEN 9 ELSE 0 END) ASC,
> provincecd ASC) as rn FROM `MD593.parquet` limit 3;
> Error: RESOURCE ERROR: One or more nodes ran out of memory while executing
> the query.
> Failure while allocating buffer.
> Fragment 0:0
> [Error Id: 2287fe71-f0cb-469a-a563-11580fceb1c5 on centos-01.qa.lab:31010]
> (state=,code=0)
> {noformat}
> Stack trace from drillbit.log
> {noformat}
> 2016-08-16 07:25:44,590 [284d4006-9f9d-b893-9352-4f54f9b1d52a:foreman] INFO
> o.a.drill.exec.work.foreman.Foreman - Query text for query id
> 284d4006-9f9d-b893-9352-4f54f9b1d52a: SELECT clientname, audiencekey,
> spendprofileid, postalcd, provincecd, provincename, postalcode_json,
> country_json, province_json, town_json, dma_json, msa_json, ROW_NUMBER() OVER
> (PARTITION BY spendprofileid ORDER BY (CASE WHEN postalcd IS NULL THEN 9
> ELSE 0 END) ASC, provincecd ASC) as rn FROM `MD593.parquet` limit 3
> ...
> 2016-08-16 07:25:46,273 [284d4006-9f9d-b893-9352-4f54f9b1d52a:frag:0:0] INFO
> o.a.d.e.p.i.xsort.ExternalSortBatch - Completed spilling to
> /tmp/drill/spill/284d4006-9f9d-b893-9352-4f54f9b1d52a_majorfragment0_minorfragment0_operator8/2
> 2016-08-16 07:25:46,283 [284d4006-9f9d-b893-9352-4f54f9b1d52a:frag:0:0] INFO
> o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred
> org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more
> nodes ran out of memory while executing the query.
> Failure while allocating buffer.
> [Error Id: 2287fe71-f0cb-469a-a563-11580fceb1c5 ]
> at
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
> ~[drill-common-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:242)
> [drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> [drill-common-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> [na:1.7.0_101]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> [na:1.7.0_101]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure
> while allocating buffer.
> at
> org.apache.drill.exec.vector.NullableVarCharVector.allocateNew(NullableVarCharVector.java:187)
> ~[vector-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:331)
> ~[vector-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.<init>(RepeatedMapVector.java:307)
> ~[vector-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.vector.complex.RepeatedMapVector.getTransferPair(RepeatedMapVector.java:161)
> ~[vector-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.SimpleVectorWrapper.cloneAndTransfer(SimpleVectorWrapper.java:66)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.VectorContainer.cloneAndTransfer(VectorContainer.java:204)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.VectorContainer.getTransferClone(VectorContainer.java:157)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.mergeAndSpill(ExternalSortBatch.java:569)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.xsort.ExternalSortBatch.innerNext(ExternalSortBatch.java:414)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.window.WindowFrameRecordBatch.innerNext(WindowFrameRecordBatch.java:108)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
> ~[drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.7.0_101]
> at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_101]
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
> ~[hadoop-common-2.7.0-mapr-1607.jar:na]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
> [drill-java-exec-1.8.0-SNAPSHOT.jar:1.8.0-SNAPSHOT]
> ... 4 common frames omitted
> {noformat}
> Full JSON profile
> {noformat}
> {
> "id": {
> "part1": 2904047731915733000,
> "part2": -7831109575658843000
> },
> "type": 1,
> "start": 1471332344590,
> "end": 1471332346309,
> "query": "SELECT clientname, audiencekey, spendprofileid, postalcd,
> provincecd, provincename, postalcode_json, country_json, province_json,
> town_json, dma_json, msa_json, ROW_NUMBER() OVER (PARTITION BY spendprofileid
> ORDER BY (CASE WHEN postalcd IS NULL THEN 9 ELSE 0 END) ASC, provincecd ASC)
> as rn FROM `MD593.parquet` limit 3",
> "plan": "00-00 Screen : rowType = RecordType(ANY clientname, ANY
> audiencekey, ANY spendprofileid, ANY postalcd, ANY provincecd, ANY
> provincename, ANY postalcode_json, ANY country_json, ANY province_json, ANY
> town_json, ANY dma_json, ANY msa_json, BIGINT rn): rowcount = 3.0, cumulative
> cost = {442769.3 rows, 1.9145930245887678E7 cpu, 0.0 io, 0.0 network,
> 9209408.0 memory}, id = 17764\n00-01 Project(clientname=[$0],
> audiencekey=[$1], spendprofileid=[$2], postalcd=[$3], provincecd=[$4],
> provincename=[$5], postalcode_json=[$6], country_json=[$7],
> province_json=[$8], town_json=[$9], dma_json=[$10], msa_json=[$11], rn=[$12])
> : rowType = RecordType(ANY clientname, ANY audiencekey, ANY spendprofileid,
> ANY postalcd, ANY provincecd, ANY provincename, ANY postalcode_json, ANY
> country_json, ANY province_json, ANY town_json, ANY dma_json, ANY msa_json,
> BIGINT rn): rowcount = 3.0, cumulative cost = {442769.0 rows,
> 1.9145929945887677E7 cpu, 0.0 io, 0.0 network, 9209408.0 memory}, id =
> 17763\n00-02 SelectionVectorRemover : rowType = RecordType(ANY
> clientname, ANY audiencekey, ANY spendprofileid, ANY postalcd, ANY
> provincecd, ANY provincename, ANY postalcode_json, ANY country_json, ANY
> province_json, ANY town_json, ANY dma_json, ANY msa_json, BIGINT $12):
> rowcount = 3.0, cumulative cost = {442769.0 rows, 1.9145929945887677E7 cpu,
> 0.0 io, 0.0 network, 9209408.0 memory}, id = 17762\n00-03
> Limit(fetch=[3]) : rowType = RecordType(ANY clientname, ANY audiencekey, ANY
> spendprofileid, ANY postalcd, ANY provincecd, ANY provincename, ANY
> postalcode_json, ANY country_json, ANY province_json, ANY town_json, ANY
> dma_json, ANY msa_json, BIGINT $12): rowcount = 3.0, cumulative cost =
> {442766.0 rows, 1.9145926945887677E7 cpu, 0.0 io, 0.0 network, 9209408.0
> memory}, id = 17761\n00-04 Limit(fetch=[3]) : rowType =
> RecordType(ANY clientname, ANY audiencekey, ANY spendprofileid, ANY postalcd,
> ANY provincecd, ANY provincename, ANY postalcode_json, ANY country_json, ANY
> province_json, ANY town_json, ANY dma_json, ANY msa_json, BIGINT $12):
> rowcount = 3.0, cumulative cost = {442763.0 rows, 1.9145914945887677E7 cpu,
> 0.0 io, 0.0 network, 9209408.0 memory}, id = 17760\n00-05
> Project(clientname=[$0], audiencekey=[$1], spendprofileid=[$2],
> postalcd=[$3], provincecd=[$4], provincename=[$5], postalcode_json=[$6],
> country_json=[$7], province_json=[$8], town_json=[$9], dma_json=[$10],
> msa_json=[$11], $12=[$13]) : rowType = RecordType(ANY clientname, ANY
> audiencekey, ANY spendprofileid, ANY postalcd, ANY provincecd, ANY
> provincename, ANY postalcode_json, ANY country_json, ANY province_json, ANY
> town_json, ANY dma_json, ANY msa_json, BIGINT $12): rowcount = 88552.0,
> cumulative cost = {442760.0 rows, 1.9145902945887677E7 cpu, 0.0 io, 0.0
> network, 9209408.0 memory}, id = 17759\n00-06
> Window(window#0=[window(partition {2} order by [12, 4] rows between UNBOUNDED
> PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])]) : rowType = RecordType(ANY
> clientname, ANY audiencekey, ANY spendprofileid, ANY postalcd, ANY
> provincecd, ANY provincename, ANY postalcode_json, ANY country_json, ANY
> province_json, ANY town_json, ANY dma_json, ANY msa_json, INTEGER $12, BIGINT
> w0$o0): rowcount = 88552.0, cumulative cost = {442760.0 rows,
> 1.9145902945887677E7 cpu, 0.0 io, 0.0 network, 9209408.0 memory}, id =
> 17758\n00-07 SelectionVectorRemover : rowType =
> RecordType(ANY clientname, ANY audiencekey, ANY spendprofileid, ANY postalcd,
> ANY provincecd, ANY provincename, ANY postalcode_json, ANY country_json, ANY
> province_json, ANY town_json, ANY dma_json, ANY msa_json, INTEGER $12):
> rowcount = 88552.0, cumulative cost = {354208.0 rows, 1.8968798945887677E7
> cpu, 0.0 io, 0.0 network, 9209408.0 memory}, id = 17757\n00-08
> Sort(sort0=[$2], sort1=[$12], sort2=[$4], dir0=[ASC], dir1=[ASC],
> dir2=[ASC]) : rowType = RecordType(ANY clientname, ANY audiencekey, ANY
> spendprofileid, ANY postalcd, ANY provincecd, ANY provincename, ANY
> postalcode_json, ANY country_json, ANY province_json, ANY town_json, ANY
> dma_json, ANY msa_json, INTEGER $12): rowcount = 88552.0, cumulative cost =
> {265656.0 rows, 1.8880246945887677E7 cpu, 0.0 io, 0.0 network, 9209408.0
> memory}, id = 17756\n00-09 Project(clientname=[$0],
> audiencekey=[$1], spendprofileid=[$2], postalcd=[$3], provincecd=[$4],
> provincename=[$5], postalcode_json=[$6], country_json=[$7],
> province_json=[$8], town_json=[$9], dma_json=[$10], msa_json=[$11],
> $12=[CASE(IS NULL($3), 9, 0)]) : rowType = RecordType(ANY clientname, ANY
> audiencekey, ANY spendprofileid, ANY postalcd, ANY provincecd, ANY
> provincename, ANY postalcode_json, ANY country_json, ANY province_json, ANY
> town_json, ANY dma_json, ANY msa_json, INTEGER $12): rowcount = 88552.0,
> cumulative cost = {177104.0 rows, 1416832.0 cpu, 0.0 io, 0.0 network, 0.0
> memory}, id = 17755\n00-10
> Scan(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath
> [path=maprfs:///tmp/MD593.parquet]], selectionRoot=maprfs:/tmp/MD593.parquet,
> numFiles=1, usedMetadataFile=false, columns=[`clientname`, `audiencekey`,
> `spendprofileid`, `postalcd`, `provincecd`, `provincename`,
> `postalcode_json`, `country_json`, `province_json`, `town_json`, `dma_json`,
> `msa_json`]]]) : rowType = RecordType(ANY clientname, ANY audiencekey, ANY
> spendprofileid, ANY postalcd, ANY provincecd, ANY provincename, ANY
> postalcode_json, ANY country_json, ANY province_json, ANY town_json, ANY
> dma_json, ANY msa_json): rowcount = 88552.0, cumulative cost = {88552.0 rows,
> 1062624.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 17754\n",
> "foreman": {
> "address": "centos-01.qa.lab",
> "userPort": 31010,
> "controlPort": 31011,
> "dataPort": 31012
> },
> "state": 4,
> "totalFragments": 1,
> "finishedFragments": 0,
> "fragmentProfile": [
> {
> "majorFragmentId": 0,
> "minorFragmentProfile": [
> {
> "state": 2,
> "minorFragmentId": 0,
> "operatorProfile": [
> {
> "inputProfile": [
> {
> "records": 16000,
> "batches": 4,
> "schemas": 1
> }
> ],
> "operatorId": 10,
> "operatorType": 21,
> "setupNanos": 0,
> "processNanos": 494393361,
> "peakLocalMemoryAllocated": 59189520,
> "waitNanos": 130630128
> },
> {
> "inputProfile": [
> {
> "records": 16000,
> "batches": 4,
> "schemas": 1
> }
> ],
> "operatorId": 9,
> "operatorType": 10,
> "setupNanos": 24628272,
> "processNanos": 11355984,
> "peakLocalMemoryAllocated": 56414208,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 16000,
> "batches": 4,
> "schemas": 1
> }
> ],
> "operatorId": 8,
> "operatorType": 17,
> "setupNanos": 0,
> "processNanos": 421184837,
> "peakLocalMemoryAllocated": 125591168,
> "metric": [
> {
> "metricId": 0,
> "longValue": 3
> },
> {
> "metricId": 2,
> "longValue": 2
> }
> ],
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 7,
> "operatorType": 14,
> "setupNanos": 1530458,
> "processNanos": 1679437,
> "peakLocalMemoryAllocated": 1437696,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 6,
> "operatorType": 34,
> "setupNanos": 0,
> "processNanos": 56384281,
> "peakLocalMemoryAllocated": 1503232,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 5,
> "operatorType": 10,
> "setupNanos": 5591165,
> "processNanos": 1524417,
> "peakLocalMemoryAllocated": 1064960,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 4,
> "operatorType": 7,
> "setupNanos": 2095858,
> "processNanos": 177317,
> "peakLocalMemoryAllocated": 0,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 3,
> "operatorType": 7,
> "setupNanos": 1505764,
> "processNanos": 171117,
> "peakLocalMemoryAllocated": 0,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 2,
> "operatorType": 14,
> "setupNanos": 44027837,
> "processNanos": 4175560,
> "peakLocalMemoryAllocated": 1363970,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 1,
> "operatorType": 10,
> "setupNanos": 5729336,
> "processNanos": 2167036,
> "peakLocalMemoryAllocated": 1363970,
> "waitNanos": 0
> },
> {
> "inputProfile": [
> {
> "records": 0,
> "batches": 1,
> "schemas": 1
> }
> ],
> "operatorId": 0,
> "operatorType": 13,
> "setupNanos": 0,
> "processNanos": 2110139,
> "peakLocalMemoryAllocated": 0,
> "metric": [
> {
> "metricId": 0,
> "longValue": 0
> }
> ],
> "waitNanos": 65789
> }
> ],
> "startTime": 1471332344836,
> "endTime": 1471332346245,
> "memoryUsed": 138190672,
> "maxMemoryUsed": 141423888,
> "endpoint": {
> "address": "centos-01.qa.lab",
> "userPort": 31010,
> "controlPort": 31011,
> "dataPort": 31012
> },
> "lastUpdate": 1471332346247,
> "lastProgress": 1471332346247
> }
> ]
> }
> ],
> "user": "anonymous",
> "error": "RESOURCE ERROR: Drill Remote Exception\n\n",
> "verboseError": "RESOURCE ERROR: Drill Remote Exception\n\n\n\n",
> "errorId": "ec5e1c2e-b4a6-4b61-9fb7-0394922b09a5",
> "errorNode": "centos-01.qa.lab:31010"
> }
> {noformat}
>