[ https://issues.apache.org/jira/browse/IMPALA-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17220858#comment-17220858 ]

Joe McDonnell commented on IMPALA-10267:
----------------------------------------

The test failure has this output:

{noformat}
query_test.test_scanners_fuzz.TestScannersFuzzing.test_fuzz_alltypes[protocol: beeswax | exec_option: {'debug_action': '-1:OPEN:[email protected]', 'abort_on_error': False, 'mem_limit': '512m', 'num_nodes': 0} | table_format: avro/none]
query_test/test_scanners_fuzz.py:82: in test_fuzz_alltypes
    self.run_fuzz_test(vector, src_db, table_name, unique_database, table_name)
query_test/test_scanners_fuzz.py:238: in run_fuzz_test
    result = self.execute_query(query, query_options = query_options)
common/impala_test_suite.py:811: in wrapper
    return function(*args, **kwargs)
common/impala_test_suite.py:843: in execute_query
    return self.__execute_query(self.client, query, query_options)
common/impala_test_suite.py:909: in __execute_query
    return impalad_client.execute(query, user=user)
common/impala_connection.py:205: in execute
    return self.__beeswax_client.execute(sql_stmt, user=user)
beeswax/impala_beeswax.py:187: in execute
    handle = self.__execute_query(query_string.strip(), user=user)
beeswax/impala_beeswax.py:365: in __execute_query
    self.wait_for_finished(handle)
beeswax/impala_beeswax.py:386: in wait_for_finished
    raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
E    Query aborted:Failed due to unreachable impalad(s): impala-ec2-centos74-m5-4xlarge-ondemand-16d0.vpc.cloudera.com:27002
E   
E   Failed to parse file schema: Error parsing JSON: unable to decode byte 0xdf
E   Failed to parse file schema: Error parsing JSON: unable to decode byte 0xc1
E   Failed to parse file schema: Unknown Avro "type": bRolean
E   Failed to parse file schema: Error parsing JSON: ':' expected near '0'
E   Problem parsing file hdfs://localhost:20500/test-warehouse/test_fuzz_alltypes_a38caef9.db/alltypes/year=2010/month=5/000000_0 at 5331(EOF) (1 of 5 similar)
E   Tried to read 19829 bytes but could only read 4691 bytes. This may indicate data file corruption. (file hdfs://localhost:20500/test-warehouse/test_fuzz_alltypes_a38caef9.db/alltypes/year=2010/month=5/000000_0, byte offset: 5331) (1 of 5 similar){noformat}

> Impala crashes in HdfsScanner::WriteTemplateTuples() with negative num_tuples
> -----------------------------------------------------------------------------
>
>                 Key: IMPALA-10267
>                 URL: https://issues.apache.org/jira/browse/IMPALA-10267
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Backend
>    Affects Versions: Impala 4.0
>            Reporter: Joe McDonnell
>            Priority: Critical
>              Labels: broken-build, flaky
>
> An exhaustive job hit two Impalad crashes with the following stack:
> {noformat}
>  2  impalad!google::LogMessageFatal::~LogMessageFatal() + 0x9
>     rbx = 0x0000000000000000   rbp = 0x00007f82f98ad7a0
>     rsp = 0x00007f82f98ad6a0   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x0000000005209129
>     Found by: call frame info
>  3  impalad!impala::HdfsScanner::WriteTemplateTuples(impala::TupleRow*, int) 
> [hdfs-scanner.cc : 235 + 0xf]
>     rbx = 0x0000000000000000   rbp = 0x00007f82f98ad7a0
>     rsp = 0x00007f82f98ad6b0   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x0000000002802013
>     Found by: call frame info
>  4  impalad!impala::HdfsAvroScanner::ProcessRange(impala::RowBatch*) 
> [hdfs-avro-scanner.cc : 553 + 0x19]
>     rbx = 0x0000000000000400   rbp = 0x00007f82f98adc60
>     rsp = 0x00007f82f98ad7b0   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x000000000283880d
>     Found by: call frame info
>  5  impalad!impala::BaseSequenceScanner::GetNextInternal(impala::RowBatch*) 
> [base-sequence-scanner.cc : 189 + 0x2b]
>     rbx = 0x0000000000000000   rbp = 0x00007f82f98adf40
>     rsp = 0x00007f82f98adc70   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x00000000029302b5
>     Found by: call frame info
>  6  impalad!impala::HdfsScanner::ProcessSplit() [hdfs-scanner.cc : 143 + 0x39]
>     rbx = 0x000000000292fbd4   rbp = 0x00007f82f98ae000
>     rsp = 0x00007f82f98adf50   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x00000000028011c9
>     Found by: call frame info
>  7  
> impalad!impala::HdfsScanNode::ProcessSplit(std::vector<impala::FilterContext, 
> std::allocator<impala::FilterContext> > const&, impala::MemPool*, 
> impala::io::ScanRange*, long*) [hdfs-scan-node.cc : 500 + 0x28]
>     rbx = 0x0000000000008000   rbp = 0x00007f82f98ae390
>     rsp = 0x00007f82f98ae010   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x000000000297aa3d
>     Found by: call frame info
>  8  impalad!impala::HdfsScanNode::ScannerThread(bool, long) 
> [hdfs-scan-node.cc : 418 + 0x27]
>     rbx = 0x00000001abc6a760   rbp = 0x00007f82f98ae750
>     rsp = 0x00007f82f98ae3a0   r12 = 0x0000000000000000
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x0000000002979dbe
>     Found by: call frame info
>  9  
> impalad!impala::HdfsScanNode::ThreadTokenAvailableCb(impala::ThreadResourcePool*)::{lambda()#1}::operator()()
>  const + 0x30
>     rbx = 0x0000000000000bbf   rbp = 0x00007f82f98ae770
>     rsp = 0x00007f82f98ae760   r12 = 0x0000000008e18f40
>     r13 = 0x00007f8306dd1690   r14 = 0x000000002f6631a0
>     r15 = 0x0000000072b8f2f0   rip = 0x0000000002979126
>     Found by: call frame info{noformat}
> This seems to happen when running 
> query_test/test_scanners_fuzz.py::TestScannersFuzzing::test_fuzz_alltypes on 
> Avro. Reading the code in HdfsAvroScanner::ProcessRange(), it seems 
> impossible for this value to be negative, so it's unclear what is happening.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
