[ https://issues.apache.org/jira/browse/IMPALA-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17640608#comment-17640608 ]

ASF subversion and git services commented on IMPALA-11738:
----------------------------------------------------------

Commit 84fa6d210d3966e5ece8b4ac84ff8bd8780dec4e in impala's branch 
refs/heads/branch-4.2.0 from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=84fa6d210 ]

IMPALA-11738: Hide symbols from compression libraries for libfesupport.so

Recently, dataload has been failing on some configurations
with an error from Hive when initializing zlib native code
in ZlibDecompressor.init(). This error goes away when
libfesupport.so is removed from JAVA_LIBRARY_PATH in
testdata/bin/run-hive-server.sh, so something about
libfesupport.so is interfering with the functioning of
zlib.
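
This kind of interposition can be checked by listing the zlib entry points that libfesupport.so exports and tracing runtime symbol binding. The commands below are an illustrative sketch; the library path and the traced command are assumptions, not taken from this report:

```shell
# Does libfesupport.so export zlib symbols? (path is illustrative)
nm -D --defined-only be/build/latest/service/libfesupport.so \
    | grep -E ' (inflate|deflate)'

# Trace which shared object each zlib call binds to while the JVM
# loads Hadoop's native codecs; bindings to libfesupport.so rather
# than libz.so would confirm the interference.
LD_DEBUG=bindings java ... 2>&1 | grep 'inflateInit'
```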

libfesupport.so includes several compression libraries
(libbz2, liblz4, libsnappy, libz, libzstd). To avoid these
types of conflicts, this change hides the compression
libraries' symbols in libfesupport.so, which prevents
libhadoop from resolving those symbols against
libfesupport.so.

Testing:
 - Ran an ASAN dataload on CentOS 7 (which previously had
   been consistently failing)
 - Ran precommit job

Change-Id: I55bda6899044ff2ad98134f5954df83f3e10a5cc
Reviewed-on: http://gerrit.cloudera.org:8080/19264
Reviewed-by: Michael Smith <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>


> Data loading failed at 
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql
> ----------------------------------------------------------------------------------------
>
>                 Key: IMPALA-11738
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11738
>             Project: IMPALA
>          Issue Type: Bug
>    Affects Versions: Impala 4.1.1
>            Reporter: Yida Wu
>            Assignee: Joe McDonnell
>            Priority: Major
>
> Ran "./bin/bootstrap_development.sh" to build the system from scratch.
> It seems to crash in hive-server2 when it executes a query
> {code:java}
> select count(*) as mv_count from functional_orc_def.mv1_alltypes_jointbl{code}
> while loading
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql.
> Found errors in 
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql.log:
> {code:java}
> Unknown HS2 problem when communicating with Thrift server.
> Error: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (Write failed) (state=08S01,code=0)
> java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (Write failed)
>         at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:225)
>         at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:266)
>         at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:289)
>         at 
> org.apache.hive.beeline.Commands.executeInternal(Commands.java:1067)
>         at org.apache.hive.beeline.Commands.execute(Commands.java:1217)
>         at org.apache.hive.beeline.Commands.sql(Commands.java:1146)
>         at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1504)
>         at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1362)
>         at org.apache.hive.beeline.BeeLine.executeFile(BeeLine.java:1336)
>         at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1134)
>         at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1089)
>         at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:547)
>         at org.apache.hive.beeline.BeeLine.main(BeeLine.java:529)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:232){code}
> Also captured a jstack from the crash:
> {code:java}
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j org.apache.hadoop.io.compress.zlib.ZlibCompressor.initIDs()V+0
> j org.apache.hadoop.io.compress.zlib.ZlibCompressor.<clinit>()V+18
> v ~StubRoutines::call_stub
> j org.apache.hadoop.io.compress.zlib.ZlibFactory.loadNativeZLib()V+6
> j org.apache.hadoop.io.compress.zlib.ZlibFactory.<clinit>()V+12
> v ~StubRoutines::call_stub
> j 
> org.apache.hadoop.io.compress.DefaultCodec.getDecompressorType()Ljava/lang/Class;+4
> j 
> org.apache.hadoop.io.compress.CodecPool.getDecompressor(Lorg/apache/hadoop/io/compress/CompressionCodec;)Lorg/apache/hadoop/io/compress/Decompressor;+4
> j org.apache.hadoop.io.SequenceFile$Reader.init(Z)V+486
> j 
> org.apache.hadoop.io.SequenceFile$Reader.initialize(Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/fs/FSDataInputStream;JJLorg/apache/hadoop/conf/Configuration;Z)V+84
> j 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(Lorg/apache/hadoop/conf/Configuration;[Lorg/apache/hadoop/io/SequenceFile$Reader$Option;)V+407
> j 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/conf/Configuration;)V+17
> j 
> org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/mapred/FileSplit;)V+30
> j 
> org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(Lorg/apache/hadoop/mapred/InputSplit;Lorg/apache/hadoop/mapred/JobConf;Lorg/apache/hadoop/mapred/Reporter;)Lorg/apache/hadoop/mapred/RecordReader;+19
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/RecordReader;+12
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader()Lorg/apache/hadoop/mapred/RecordReader;+266
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow()Lorg/apache/hadoop/hive/serde2/objectinspector/InspectableObject;+25
> j org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow()Z+70
> j org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(Ljava/util/List;)Z+170
> j org.apache.hadoop.hive.ql.exec.FetchTask.execute()I+12
> J 22489 C1 org.apache.hadoop.hive.ql.Driver.runInternal(Ljava/lang/String;Z)V 
> (1199 bytes) @ 0x00007f928563b904 [0x00007f9285638600+0x3304]
> J 22488 C1 
> org.apache.hadoop.hive.ql.Driver.run(Ljava/lang/String;Z)Lorg/apache/hadoop/hive/ql/processors/CommandProcessorResponse;
>  (269 bytes) @ 0x00007f928561fb44 [0x00007f928561faa0+0xa4]
> J 19121 C1 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run()Lorg/apache/hadoop/hive/ql/processors/CommandProcessorResponse;
>  (300 bytes) @ 0x00007f9283c0c034 [0x00007f9283c0b4c0+0xb74]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
