[ https://issues.apache.org/jira/browse/HIVE-24138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195318#comment-17195318 ]

László Bodor commented on HIVE-24138:
-------------------------------------

[~ayushtkn]: I can see in the pull request that you almost achieved a green run 
without upgrading hadoop/guava, which is promising, but I'm a bit confused by the 
contradictory contents of the consecutive commits. Is it possible to squash and 
force-push them so the overall picture on the PR is easy to see? That would leave 
a single, smaller commit and might help us decide how to finally solve this issue.

> Llap external client flow is broken due to netty shading
> --------------------------------------------------------
>
>                 Key: HIVE-24138
>                 URL: https://issues.apache.org/jira/browse/HIVE-24138
>             Project: Hive
>          Issue Type: Bug
>          Components: llap
>            Reporter: Shubham Chaurasia
>            Assignee: Ayush Saxena
>            Priority: Critical
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We shaded netty in hive-exec in 
> https://issues.apache.org/jira/browse/HIVE-23073
> This breaks the LLAP external client flow on the LLAP daemon side.
> LLAP daemon stacktrace:
> {code}
> 2020-09-09T18:22:13,413  INFO [TezTR-222977_4_0_0_0_0 (4974183244412222977_0004_0_00_000000_0)] llap.LlapOutputFormat: Returning writer for: attempt_4974183244412222977_0004_0_00_000000_0
> 2020-09-09T18:22:13,419 ERROR [TezTR-222977_4_0_0_0_0 (4974183244412222977_0004_0_00_000000_0)] tez.MapRecordSource: java.lang.NoSuchMethodError: org.apache.arrow.memory.BufferAllocator.buffer(I)Lorg/apache/hive/io/netty/buffer/ArrowBuf;
>       at org.apache.hadoop.hive.llap.WritableByteChannelAdapter.write(WritableByteChannelAdapter.java:96)
>       at org.apache.arrow.vector.ipc.WriteChannel.write(WriteChannel.java:74)
>       at org.apache.arrow.vector.ipc.WriteChannel.write(WriteChannel.java:57)
>       at org.apache.arrow.vector.ipc.WriteChannel.writeIntLittleEndian(WriteChannel.java:89)
>       at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:88)
>       at org.apache.arrow.vector.ipc.ArrowWriter.ensureStarted(ArrowWriter.java:130)
>       at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:102)
>       at org.apache.hadoop.hive.llap.LlapArrowRecordWriter.write(LlapArrowRecordWriter.java:85)
>       at org.apache.hadoop.hive.llap.LlapArrowRecordWriter.write(LlapArrowRecordWriter.java:46)
>       at org.apache.hadoop.hive.ql.exec.vector.filesink.VectorFileSinkArrowOperator.process(VectorFileSinkArrowOperator.java:137)
>       at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
>       at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158)
>       at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
>       at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:172)
>       at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.deliverVectorizedRowBatch(VectorMapOperator.java:809)
>       at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:842)
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>       at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>       at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>       at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>       at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:75)
>       at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:62)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>       at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:62)
>       at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:38)
>       at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>       at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> {code}
> The Arrow method signature mismatch happens mainly because Arrow ships some 
> classes that are packaged under {{io.netty.buffer.*}} - 
> {code}
> io.netty.buffer.ArrowBuf
> io.netty.buffer.ExpandableByteBuf
> io.netty.buffer.LargeBuffer
> io.netty.buffer.MutableWrappedByteBuf
> io.netty.buffer.PooledByteBufAllocatorL
> io.netty.buffer.UnsafeDirectLittleEndian
> {code}
> Since we have relocated netty, these classes have also been relocated to 
> {{org.apache.hive.io.netty.buffer.*}}, which causes the {{NoSuchMethodError}} 
> (a diagnostic sketch follows the quoted description below).
> cc [~anishek] [~thejas] [~abstractdog] [~irashid] [~bruce.robbins]
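
For context on the {{NoSuchMethodError}} quoted above: the JVM links a call site by its full method descriptor, return type included, so code compiled against the relocated {{org.apache.hive.io.netty.buffer.ArrowBuf}} cannot resolve {{BufferAllocator.buffer(int)}} on an Arrow jar that still returns {{io.netty.buffer.ArrowBuf}}, and vice versa. Below is a minimal diagnostic sketch (not from the ticket; the class name ArrowNettyRelocationCheck is made up, and it assumes an Arrow memory jar is on the classpath). It simply prints the return types that {{BufferAllocator.buffer}} exposes at runtime, which shows whether the shaded or the unshaded classes are being picked up:

{code}
import java.lang.reflect.Method;

// Hypothetical diagnostic, not part of Hive or Arrow: inspect the return type
// of BufferAllocator.buffer(...) as it appears on the runtime classpath.
public class ArrowNettyRelocationCheck {
  public static void main(String[] args) throws Exception {
    Class<?> allocator = Class.forName("org.apache.arrow.memory.BufferAllocator");
    for (Method m : allocator.getMethods()) {
      if (m.getName().equals("buffer")) {
        // NoSuchMethodError appears when a caller was compiled against a
        // different return type than the one reported here, because the JVM
        // matches the complete descriptor, return type included.
        System.out.println(m + " -> returns " + m.getReturnType().getName());
      }
    }
  }
}
{code}

Running such a check against the LLAP daemon's classpath and against the external client's classpath would show which side resolves the relocated class.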



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
