[
https://issues.apache.org/jira/browse/DRILL-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343842#comment-16343842
]
ASF GitHub Bot commented on DRILL-5846:
---------------------------------------
Github user sachouche commented on a diff in the pull request:
https://github.com/apache/drill/pull/1060#discussion_r164532290
--- Diff: exec/memory/base/src/main/java/io/netty/buffer/DrillBuf.java ---
@@ -703,7 +703,18 @@ protected void _setLong(int index, long value) {
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
- udle.getBytes(index + offset, dst, dstIndex, length);
+ final int BULK_COPY_THR = 1024;
--- End diff --
You are right, I'll add more information on this optimization:
- During code profiling, I noticed that getBytes() doesn't perform well when
called with a small length (below 1 KB)
- Its throughput improves as the length increases
Analysis:
- Java exposes two intrinsics for writing to direct memory: putByte and
copyMemory
- The JVM is able to inline the memory access (no function call) for putByte
- copyMemory is a bulk API that internally invokes libc memcpy (which
requires a function call)
- The rationale is that we are willing to incur a function call only when the
associated processing outweighs the call overhead; this is almost never the
case for small memory accesses (a minimal sketch of the threshold logic
follows this list)
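To illustrate the idea, here is a minimal, self-contained sketch of
threshold-based copying using java.nio.ByteBuffer. The class name
SmallVsBulkCopy, the constant BULK_COPY_THRESHOLD, and the ByteBuffer-based
bulk path are illustrative assumptions for this example, not the actual
DrillBuf patch (which operates on Netty direct buffers).

// Illustrative sketch only; not the DrillBuf implementation.
import java.nio.ByteBuffer;

public class SmallVsBulkCopy {

    // Hypothetical threshold; the Drill patch uses 1024 (BULK_COPY_THR).
    private static final int BULK_COPY_THRESHOLD = 1024;

    // Copies 'length' bytes from src (starting at srcIndex) into dst
    // (starting at dstIndex), choosing the path based on the length.
    static void copyBytes(ByteBuffer src, int srcIndex,
                          ByteBuffer dst, int dstIndex, int length) {
        if (length < BULK_COPY_THRESHOLD) {
            // Small copies: per-byte loop; the JIT can inline each
            // single-byte access, avoiding the fixed cost of a bulk call.
            for (int i = 0; i < length; i++) {
                dst.put(dstIndex + i, src.get(srcIndex + i));
            }
        } else {
            // Large copies: one bulk call amortizes the call overhead
            // (analogous to copyMemory / memcpy for direct memory).
            ByteBuffer srcView = src.duplicate();
            srcView.position(srcIndex).limit(srcIndex + length);
            ByteBuffer dstView = dst.duplicate();
            dstView.position(dstIndex);
            dstView.put(srcView);
        }
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocateDirect(4096);
        ByteBuffer dst = ByteBuffer.allocateDirect(4096);
        for (int i = 0; i < 4096; i++) {
            src.put(i, (byte) i);
        }
        copyBytes(src, 0, dst, 0, 100);     // takes the per-byte path
        copyBytes(src, 0, dst, 0, 2048);    // takes the bulk path
        System.out.println(dst.get(99) == src.get(99));       // true
        System.out.println(dst.get(2047) == src.get(2047));   // true
    }
}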
> Improve Parquet Reader Performance for Flat Data types
> -------------------------------------------------------
>
> Key: DRILL-5846
> URL: https://issues.apache.org/jira/browse/DRILL-5846
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - Parquet
> Affects Versions: 1.11.0
> Reporter: salim achouche
> Assignee: salim achouche
> Priority: Major
> Labels: performance
> Fix For: 1.13.0
>
>
> The Parquet Reader is a key use case for Drill. This JIRA is an attempt to
> further improve the Parquet Reader performance, as several users reported that
> Parquet parsing represents the lion's share of the overall query execution. It
> tracks flat data types only, as nested data types might involve functional and
> processing enhancements (e.g., a nested column can be seen as a document; a
> user might want to perform operations scoped at the document level, that is,
> with no need to span all rows). Another JIRA will be created to handle the
> nested columns use case.