[
https://issues.apache.org/jira/browse/IMPALA-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704083#comment-16704083
]
Tim Armstrong commented on IMPALA-6046:
---------------------------------------
Saw this on a data load on an Impala 2.12-based branch:
{noformat}
12:45:27 Started Loading functional-query data in background; pid 9552.
12:45:27 Started Loading TPC-H data in background; pid 9553.
12:45:27 Loading functional-query data (logging to /data/jenkins/workspace/impala-cdh5-trunk-core-data-load/repos/Impala/logs/data_loading/load-functional-query.log)...
12:45:27 Started Loading TPC-DS data in background; pid 9554.
12:45:27 Loading TPC-H data (logging to /data/jenkins/workspace/impala-cdh5-trunk-core-data-load/repos/Impala/logs/data_loading/load-tpch.log)...
12:45:27 Loading TPC-DS data (logging to /data/jenkins/workspace/impala-cdh5-trunk-core-data-load/repos/Impala/logs/data_loading/load-tpcds.log)...
12:57:46 FAILED (Took: 12 min 19 sec)
12:57:46 'load-data functional-query exhaustive' failed. Tail of log:
12:57:46 at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:585)
12:57:46 at org.apache.hadoop.hbase.client.HBaseAdmin$4.call(HBaseAdmin.java:574)
12:57:46 at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
12:57:46 ... 39 more
12:57:46 Caused by: java.lang.NullPointerException: Inflater has been closed
12:57:46 at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
12:57:46 at java.util.zip.Inflater.inflate(Inflater.java:257)
12:57:46 at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
12:57:46 at java.io.FilterInputStream.read(FilterInputStream.java:133)
12:57:46 at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
12:57:46 at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
12:57:46 at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
12:57:46 at java.io.InputStreamReader.read(InputStreamReader.java:184)
12:57:46 at java.io.BufferedReader.fill(BufferedReader.java:154)
12:57:46 at java.io.BufferedReader.readLine(BufferedReader.java:317)
12:57:46 at java.io.BufferedReader.readLine(BufferedReader.java:382)
12:57:46 at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
12:57:46 at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
12:57:46 at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
12:57:46 at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2676)
12:57:46 at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2653)
12:57:46 at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2559)
12:57:46 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1244)
12:57:46 at org.apache.hadoop.conf.Configuration.set(Configuration.java:1216)
12:57:46 at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1561)
12:57:46 at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.<init>(Compression.java:249)
12:57:46 at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.<init>(Compression.java:105)
12:57:46 at org.apache.hadoop.hbase.io.compress.Compression$Algorithm$5.<init>(Compression.java:212)
12:57:46 at org.apache.hadoop.hbase.io.compress.Compression$Algorithm.<clinit>(Compression.java:212)
12:57:46 ... 44 more
12:57:46 )
12:57:46 at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:294)
12:57:46 at org.apache.hive.beeline.Commands.executeInternal(Commands.java:989)
12:57:46 at org.apache.hive.beeline.Commands.execute(Commands.java:1180)
12:57:46 at org.apache.hive.beeline.Commands.sql(Commands.java:1094)
12:57:46 at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1180)
12:57:46 at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1013)
12:57:46 at org.apache.hive.beeline.BeeLine.executeFile(BeeLine.java:987)
12:57:46 at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:914)
12:57:46 at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:518)
12:57:46 at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
12:57:46 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
12:57:46 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
12:57:46 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
12:57:46 at java.lang.reflect.Method.invoke(Method.java:606)
12:57:46 at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
12:57:46 at org.apache.hadoop.util.RunJar.main(RunJar.java:141)
12:57:46
12:57:46 Closing: 0: jdbc:hive2://localhost:11050/default;auth=none
12:57:46 Error executing file from Hive: load-functional-query-exhaustive-hive-generated.sql
12:57:46 Background task Loading functional-query data (pid 9552) failed.
13:00:32 Loading workload 'tpch' using exploration strategy 'core' OK (Took: 15 min 5 sec)
13:00:59 Loading workload 'tpcds' using exploration strategy 'core' OK (Took: 15 min 32 sec)
13:00:59 Error in /data/jenkins/workspace/impala-cdh5-trunk-core-data-load/repos/Impala/testdata/bin/create-load-data.sh at line 85: ;;
{noformat}
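The "Inflater has been closed" NullPointerException in the trace above is a JDK-level symptom: any use of a java.util.zip.Inflater after end() has been called on it fails this way, which is consistent with HADOOP-13809's theory that the underlying jar/stream is being released while Configuration is still reading from it. A minimal sketch of just that JDK behavior (class name is illustrative, not from the failing job):
{noformat}
import java.util.zip.Inflater;

public class ClosedInflaterDemo {
    public static void main(String[] args) throws Exception {
        Inflater inf = new Inflater();
        inf.end(); // simulate the Inflater being released out from under the reader
        try {
            inf.inflate(new byte[16]); // any use after end() fails
        } catch (NullPointerException e) {
            // The JDK's Inflater.ensureOpen() throws exactly this message.
            System.out.println("NullPointerException: " + e.getMessage());
        }
    }
}
{noformat}
Running it prints "NullPointerException: Inflater has been closed", matching the Inflater.ensureOpen frame in the stack trace.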
> test_partition_metadata_compatibility error: Hive query failing (HADOOP-13809)
> ------------------------------------------------------------------------------
>
> Key: IMPALA-6046
> URL: https://issues.apache.org/jira/browse/IMPALA-6046
> Project: IMPALA
> Issue Type: Bug
> Components: Infrastructure
> Affects Versions: Impala 2.11.0, Impala 2.12.0
> Reporter: Bikramjeet Vig
> Priority: Major
> Labels: flaky
>
> For the test
> metadata/test_partition_metadata.py::TestPartitionMetadata::test_partition_metadata_compatibility,
> a query issued to Hive via beeline/HS2 is failing.
> From Hive logs:
> {noformat}
> 2017-10-11 17:59:13,631 ERROR transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
> javax.security.sasl.SaslException: Invalid message format [Caused by java.lang.IllegalStateException: zip file closed]
> at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:107)
> at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
> at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: zip file closed
> at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634)
> at java.util.zip.ZipFile.getEntry(ZipFile.java:305)
> at java.util.jar.JarFile.getEntry(JarFile.java:227)
> at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128)
> at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132)
> at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
> at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233)
> at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
> at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
> at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
> at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2606)
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2583)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2489)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1174)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1146)
> at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:525)
> at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:543)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:437)
> at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:2803)
> at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:2761)
> at org.apache.hive.service.auth.AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory.java:61)
> at org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(PlainSaslHelper.java:104)
> at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:102)
> ... 8 more
> 2017-10-11 17:59:13,633 INFO session.SessionState (SessionState.java:dropPathAndUnregisterDeleteOnExit(785)) - Deleted directory: /tmp/hive/jenkins/72505700-e690-4355-bdd2-55db2188a976 on fs with scheme hdfs
> 2017-10-11 17:59:13,635 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(297)) - Error occurred during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid message format
> at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException: Invalid message format
> at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
> at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
> at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
> ... 4 more
> {noformat}
> From the test output:
> {noformat}
> 20:16:05 =================================== FAILURES ===================================
> 20:16:05 metadata/test_partition_metadata.py:135: in test_partition_metadata_compatibility
> 20:16:05 self.run_stmt_in_hive("select * from %s" % FQ_TBL_HIVE)
> 20:16:05 common/impala_test_suite.py:642: in run_stmt_in_hive
> 20:16:05 raise RuntimeError(stderr)
> 20:16:05 E scan complete in 4ms
> 20:16:05 E Connecting to jdbc:hive2://localhost:11050
> 20:16:05 E Unknown HS2 problem when communicating with Thrift server.
> 20:16:05 E Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:11050: Peer indicated failure: Invalid message format (state=08S01,code=0)
> 20:16:05 E No current connection
> 20:16:05 ---------------------------- Captured stderr setup -----------------------------
> 20:16:05 SET sync_ddl=False;
> 20:16:05 -- executing against localhost:21000
> 20:16:05 DROP DATABASE IF EXISTS `test_partition_metadata_compatibility_df46a41a` CASCADE;
> 20:16:05
> 20:16:05 SET sync_ddl=False;
> 20:16:05 -- executing against localhost:21000
> 20:16:05 CREATE DATABASE `test_partition_metadata_compatibility_df46a41a`;
> 20:16:05
> 20:16:05 MainThread: Created database "test_partition_metadata_compatibility_df46a41a" for test ID "metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]"
> 20:16:05 ----------------------------- Captured stderr call -----------------------------
> 20:16:05 -- executing against localhost:21000
> 20:16:05 invalidate metadata test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> 20:16:05
> 20:16:05 -- executing against localhost:21000
> 20:16:05 compute stats test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> 20:16:05
> 20:16:05 -- executing against localhost:21000
> 20:16:05 select * from test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> {noformat}
> This seems related to HADOOP-13809, and according to that issue it appears
> to be a random failure. Will have to keep an eye out to see if it fails
> consistently.
> Update 1: it turns out this really is a random failure; the next build ran
> without any problems.
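The "zip file closed" cause in the Hive-side trace is likewise a plain JDK behavior: once a java.util.zip.ZipFile (or JarFile) is closed, any further access throws IllegalStateException, which fits HADOOP-13809's diagnosis of the classloader's jar being closed while Configuration is still loading XML resources from it. A minimal sketch of that symptom (file and entry names are illustrative only):

{noformat}
import java.io.File;
import java.io.FileOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ClosedZipDemo {
    public static void main(String[] args) throws Exception {
        // Build a throwaway zip with one entry.
        File f = File.createTempFile("demo", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(f))) {
            zos.putNextEntry(new ZipEntry("core-site.xml"));
            zos.write("<configuration/>".getBytes("UTF-8"));
            zos.closeEntry();
        }
        ZipFile zf = new ZipFile(f);
        zf.close(); // simulate the classloader's jar being closed prematurely
        try {
            zf.getEntry("core-site.xml"); // any access after close() fails
        } catch (IllegalStateException e) {
            // ZipFile.ensureOpen() throws with exactly this message.
            System.out.println("IllegalStateException: " + e.getMessage());
        }
        f.delete();
    }
}
{noformat}

Running it prints "IllegalStateException: zip file closed", matching the ZipFile.ensureOpen frame in the SASL negotiation failure above.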
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]