[ https://issues.apache.org/jira/browse/IMPALA-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Armstrong resolved IMPALA-6046.
-----------------------------------
    Resolution: Won't Fix

This appears to be a JDK bug, per the linked HADOOP JIRA, so I don't think we can 
do anything on our end. It hasn't happened in a while, so hopefully it is solved 
with time and JDK upgrades.
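
For reference, the "zip file closed" in the stack trace below comes from a cached, 
shared JarFile handle behind a jar: URL connection being closed while another 
thread is still loading resources through it. A commonly cited JVM-level 
mitigation (not something that was applied here, and not a real fix for the JDK 
bug) is to disable URL connection caching so each jar: connection opens its own 
JarFile. A minimal sketch, assuming a plain JDK 8 runtime:

{code:java}
import java.net.URL;
import java.net.URLConnection;

public class DisableJarUrlCaching {
    public static void main(String[] args) throws Exception {
        // Any URLConnection instance can flip the JVM-wide default on JDK 8 and
        // earlier; a file: URL is used here only to obtain an instance.
        URLConnection conn = new URL("file:///").openConnection();

        // With caching off, jar: connections no longer share a cached JarFile,
        // so a JarFile closed elsewhere cannot break unrelated resource loads.
        // The trade-off is re-opening the jar on every resource lookup.
        conn.setDefaultUseCaches(false);

        System.out.println("default useCaches = " + conn.getDefaultUseCaches());
    }
}
{code}

Because it trades classloading performance for robustness, this is only a 
workaround, not a fix for the underlying JDK behavior.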

> test_partition_metadata_compatibility error: Hive query failing (HADOOP-13809)
> ------------------------------------------------------------------------------
>
>                 Key: IMPALA-6046
>                 URL: https://issues.apache.org/jira/browse/IMPALA-6046
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Infrastructure
>    Affects Versions: Impala 2.11.0, Impala 2.12.0
>            Reporter: Bikramjeet Vig
>            Priority: Major
>              Labels: flaky
>
> For the test 
> metadata/test_partition_metadata.py::TestPartitionMetadata::test_partition_metadata_compatibility,
> a query issued to Hive via beeline/HS2 is failing.
> From Hive logs:
> {noformat}
> 2017-10-11 17:59:13,631 ERROR transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
> javax.security.sasl.SaslException: Invalid message format [Caused by java.lang.IllegalStateException: zip file closed]
>       at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:107)
>       at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
>       at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
>       at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>       at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>       at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: zip file closed
>       at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634)
>       at java.util.zip.ZipFile.getEntry(ZipFile.java:305)
>       at java.util.jar.JarFile.getEntry(JarFile.java:227)
>       at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128)
>       at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132)
>       at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
>       at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233)
>       at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
>       at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
>       at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
>       at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>       at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2606)
>       at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2583)
>       at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2489)
>       at org.apache.hadoop.conf.Configuration.set(Configuration.java:1174)
>       at org.apache.hadoop.conf.Configuration.set(Configuration.java:1146)
>       at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:525)
>       at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:543)
>       at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:437)
>       at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:2803)
>       at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:2761)
>       at org.apache.hive.service.auth.AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory.java:61)
>       at org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(PlainSaslHelper.java:104)
>       at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:102)
>       ... 8 more
> 2017-10-11 17:59:13,633 INFO  session.SessionState (SessionState.java:dropPathAndUnregisterDeleteOnExit(785)) - Deleted directory: /tmp/hive/jenkins/72505700-e690-4355-bdd2-55db2188a976 on fs with scheme hdfs
> 2017-10-11 17:59:13,635 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(297)) - Error occurred during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid message format
>       at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>       at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException: Invalid message format
>       at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>       at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>       at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>       at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>       ... 4 more
> {noformat}
> From the test output:
> {noformat}
> 20:16:05 =================================== FAILURES ===================================
> 20:16:05 metadata/test_partition_metadata.py:135: in test_partition_metadata_compatibility
> 20:16:05     self.run_stmt_in_hive("select * from %s" % FQ_TBL_HIVE)
> 20:16:05 common/impala_test_suite.py:642: in run_stmt_in_hive
> 20:16:05     raise RuntimeError(stderr)
> 20:16:05 E   scan complete in 4ms
> 20:16:05 E   Connecting to jdbc:hive2://localhost:11050
> 20:16:05 E   Unknown HS2 problem when communicating with Thrift server.
> 20:16:05 E   Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:11050: Peer indicated failure: Invalid message format (state=08S01,code=0)
> 20:16:05 E   No current connection
> 20:16:05 ---------------------------- Captured stderr setup -----------------------------
> 20:16:05 SET sync_ddl=False;
> 20:16:05 -- executing against localhost:21000
> 20:16:05 DROP DATABASE IF EXISTS `test_partition_metadata_compatibility_df46a41a` CASCADE;
> 20:16:05 
> 20:16:05 SET sync_ddl=False;
> 20:16:05 -- executing against localhost:21000
> 20:16:05 CREATE DATABASE `test_partition_metadata_compatibility_df46a41a`;
> 20:16:05 
> 20:16:05 MainThread: Created database "test_partition_metadata_compatibility_df46a41a" for test ID "metadata/test_partition_metadata.py::TestPartitionMetadata::()::test_partition_metadata_compatibility[exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]"
> 20:16:05 ----------------------------- Captured stderr call -----------------------------
> 20:16:05 -- executing against localhost:21000
> 20:16:05 invalidate metadata test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> 20:16:05 
> 20:16:05 -- executing against localhost:21000
> 20:16:05 compute stats test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> 20:16:05 
> 20:16:05 -- executing against localhost:21000
> 20:16:05 select * from test_partition_metadata_compatibility_df46a41a.part_parquet_tbl_hive;
> {noformat}
> This seems to be related to HADOOP-13809, and according to that issue it 
> appears to be a random failure. Will have to keep an eye out to see whether it 
> fails consistently.
> Update 1: it turns out this really is a random failure; the next build ran 
> without any problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
