[ https://issues.apache.org/jira/browse/TRAFODION-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308804#comment-16308804 ]
ASF GitHub Bot commented on TRAFODION-2854:
-------------------------------------------

GitHub user selvaganesang opened a pull request:

    https://github.com/apache/trafodion/pull/1364

[TRAFODION-2854] Load encounter Operating system error 201

When load returns an error during insert, the row is reconstructed so that it
can be logged in the same format as the source. During this reconstruction,
the wrong tuple format was used.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/selvaganesang/incubator-trafodion trafodion-2854

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/trafodion/pull/1364.patch

To close this pull request, make a commit to your master/trunk branch with (at
least) the following in the commit message:

    This closes #1364

----

commit daaadefec5e20e2f39f7da72348978d5611cf5dd
Author: selvaganesang <selva.govindarajan@...>
Date:   2018-01-02T21:31:24Z

    [TRAFODION-2854] Load encounter Operating system error 201

    When load returns an error during insert, the row is reconstructed so
    that it can be logged in the same format as the source. During this
    reconstruction, the wrong tuple format was used.
----

> Load encounter Operating system error 201
> -----------------------------------------
>
>                 Key: TRAFODION-2854
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-2854
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-exe
>            Reporter: Selvaganesan Govindarajan
>            Assignee: Selvaganesan Govindarajan
>             Fix For: 2.3
>
> Loading data from Hive encounters error 201, but "upsert using load" reports
> a different error:
>
> >>load with log error rows to '/bulkload/logs' into
> >>TRAFODION.ODS_SC.DM_FUNCTION_LOCATION select * from
> >>hive.hive.DM_FUNCTION_LOCATION;
> Task: LOAD          Status: Started   Object: TRAFODION.ODS_SC.DM_FUNCTION_LOCATION
> Task: CLEANUP       Status: Started   Time: 2017-12-15 10:04:17.366
> Task: CLEANUP       Status: Ended     Time: 2017-12-15 10:04:17.385
> Task: CLEANUP       Status: Ended     Elapsed Time: 00:00:00.019
> Logging Location: /bulkload/logs/ERR_TRAFODION.ODS_SC.DM_FUNCTION_LOCATION_20171215_020417
> Task: LOADING DATA  Status: Started   Time: 2017-12-15 10:04:17.385
> *** ERROR[2034] $Z000RN4:502: Operating system error 201 while communicating
> with server process $Z0211QE:526.
> *** ERROR[2034] $Z000RN4:502: Operating system error 201 while communicating
> with server process $Z0211QE:526.
> *** ERROR[2034] $Z000RN4:502: Operating system error 201 while communicating
> with server process $Z0211QE:526.
>
> SQL>upsert using load into TRAFODION.ODS_SC.DM_FUNCTION_LOCATION select *
> from hive.hive.DM_FUNCTION_LOCATION;
> *** ERROR[8411] A numeric overflow occurred during an arithmetic computation
> or data conversion. Conversion of Source Type:CHAR(REC_BYTE_F_ASCII,15
> BYTES,ISO88591) Source Value:214040391595900 to Target Type:DECIMAL
> SIGNED(REC_DECIMAL_LSE).
> [2017-12-15 13:46:59]
> Additional Information: the core file and stack trace are under
> 10.10.22.152:/opt/GuangXi_20171218
> Account: root/linux
> Steps to analyze the core:
>
> [root@esggy-del-n002 GuangXi_20171218]# gdb
> (gdb) source zgdb-GuangXi_20171218-
> (gdb) file /opt/esgynDB223/export/bin64/tdm_arkesp
> (gdb) core core.44989
> (gdb) bt
> #0  0x00007f440e0ea1d7 in raise () from /opt/GuangXi_20171218/lib64/libc.so.6
> #1  0x00007f440e0eb8c8 in abort () from /opt/GuangXi_20171218/lib64/libc.so.6
> #2  0x00007f4410039f85 in os::abort(bool) () from
>     /opt/GuangXi_20171218/opt/jdk1.8.0_112/jre/lib/amd64/server/libjvm.so
> #3  0x00007f44101dc383 in VMError::report_and_die() () from
>     /opt/GuangXi_20171218/opt/jdk1.8.0_112/jre/lib/amd64/server/libjvm.so
> #4  0x00007f441003f48f in JVM_handle_linux_signal () from
>     /opt/GuangXi_20171218/opt/jdk1.8.0_112/jre/lib/amd64/server/libjvm.so
> #5  0x00007f44100359d3 in signalHandler(int, siginfo*, void*) () from
>     /opt/GuangXi_20171218/opt/jdk1.8.0_112/jre/lib/amd64/server/libjvm.so
> #6  <signal handler called>
> #7  0x00007f440e1ffde0 in __memcpy_ssse3_back () from
>     /opt/GuangXi_20171218/lib64/libc.so.6
> #8  0x00007f441308aeaf in str_cpy_all (length=-7167, src=<error reading
>     variable: Cannot access memory at address 0xbc>,
>     tgt=0x7fff64f1ed60 "\200") at ../common/str.h:265
> #9  getLength (data=<error reading variable: Cannot access memory at
>     address 0xbc>, this=0x80) at ../exp/exp_attrs.h:415
> #10 ExHbaseAccessBulkLoadPrepSQTcb::createLoggingRow (this=<optimized out>,
>     tuppIndex=<optimized out>, tuppRow=0x7f43fe67e3d0 "",
>     targetRow=0x7f43fe68e401 "", targetRowLen=@0x7fff64f1eea0: 0) at
>     ../executor/ExHbaseIUD.cpp:1754
> #11 0x00007f441308ed1c in ExHbaseAccessBulkLoadPrepSQTcb::work
>     (this=0x7f43fe662f48) at ../executor/ExHbaseIUD.cpp:1591
> #12 0x00007f441309b46f in donotUpdateCounters (this=0x1343d9928) at
>     ../executor/ExStats.h:3729
> #13 ExScheduler::work (this=0x7f44141a2008, prevWaitTime=<optimized out>)
>     at ../executor/ExScheduler.cpp:296
> #14 0x00007f4412fc69ed in ExEspFragInstanceDir::fixupEntry (this=0x0,
>     handle=320060680, numOfParentInstances=1693577552, da=...)
>     at ../executor/ex_esp_frag_dir.cpp:331
> #15 0x000000000040ca26 in runESP (argc=argc@entry=3,
>     argv=argv@entry=0x7fff64f1f878,
>     guaReceiveFastStart=guaReceiveFastStart@entry=0x0)
>     at ../bin/ex_esp_main.cpp:404
> #16 0x000000000040bd9b in main (argc=3, argv=0x7fff64f1f878) at
>     ../bin/ex_esp_main.cpp:258
> (gdb)
>
> This issue was observed at a customer installation that deployed EsgynDB.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)