[
https://issues.apache.org/jira/browse/SQOOP-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824235#comment-15824235
]
Antonio Villar commented on SQOOP-3106:
---------------------------------------
The structure of the original destination table is:
CREATE TABLE "ECI_ORA"."CARTERA_XPAN_PROD_EXT_15"
( "TIINDI" NUMBER(38,0),
"CODINDI" NUMBER(38,0),
"EMPIMP" NUMBER(38,0),
"OFIIMP" NUMBER(38,0),
"EMPEMI" NUMBER(38,0),
"TICOGE" NUMBER(38,0),
"CONCSIG" NUMBER(38,0),
"CDCREP" NUMBER(38,0),
"EMPDIV" NUMBER(38,0),
"CODDIV" NUMBER(38,0),
"CODCARTE" NUMBER(38,0),
"EMPGEST" NUMBER(38,0),
"CDSEGCLI" NUMBER(38,0),
"TIPCAR" NUMBER(38,0),
"NUESTCE" NUMBER(38,0),
"TIDADCE" NUMBER(38,0),
"EQUIPO" NUMBER(38,0),
"CDSEGBAN" NUMBER(38,0),
"CODPROD" NUMBER(38,0),
"INDVALO101" VARCHAR2(38 CHAR),
"INDVALO102" VARCHAR2(38 CHAR),
"INDVALO103" VARCHAR2(38 CHAR),
"INDVALO104" VARCHAR2(38 CHAR),
"INDVALO105" VARCHAR2(38 CHAR),
"INDVALO106" VARCHAR2(38 CHAR),
"INDVALO107" VARCHAR2(38 CHAR),
"INDVALO108" VARCHAR2(38 CHAR),
"INDVALO109" VARCHAR2(38 CHAR),
"INDVALO110" VARCHAR2(38 CHAR),
"INDVALO111" VARCHAR2(38 CHAR),
"INDVALO112" VARCHAR2(38 CHAR),
"INDVALO113" VARCHAR2(38 CHAR),
"INDVALO114" VARCHAR2(38 CHAR),
"INDVALO115" VARCHAR2(38 CHAR),
"INDVALO116" VARCHAR2(38 CHAR),
"INDVALO117" VARCHAR2(38 CHAR),
"INDVALO118" VARCHAR2(38 CHAR),
"INDVALO119" VARCHAR2(38 CHAR),
"INDVALO201" VARCHAR2(38 CHAR),
"INDVALO202" VARCHAR2(38 CHAR),
"INDVALO203" VARCHAR2(38 CHAR),
"INDVALO204" VARCHAR2(38 CHAR),
"INDVALO205" VARCHAR2(38 CHAR),
"INDVALO206" VARCHAR2(38 CHAR),
"INDVALO207" VARCHAR2(38 CHAR),
"INDVALO208" VARCHAR2(38 CHAR),
"INDVALO209" VARCHAR2(38 CHAR),
"INDVALO210" VARCHAR2(38 CHAR),
"INDVALO211" VARCHAR2(38 CHAR),
"INDVALO212" VARCHAR2(38 CHAR),
"INDVALO213" VARCHAR2(38 CHAR),
"INDVALO214" VARCHAR2(38 CHAR),
"INDVALO215" VARCHAR2(38 CHAR),
"INDVALO216" VARCHAR2(38 CHAR),
"INDVALO217" VARCHAR2(38 CHAR),
"INDVALO218" VARCHAR2(38 CHAR),
"INDVALO219" VARCHAR2(38 CHAR),
"NIVEL" NUMBER(38,0)
) SEGMENT CREATION IMMEDIATE
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS BASIC NOLOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "ECI_ORA_DAT" ;
If I change the column types of the Oracle table from NUMBER(38,0) to VARCHAR2(38 CHAR),
the Sqoop process runs fine.
So the problem seems to be in the handling of NUMBER values.
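The workaround I tested can be sketched as a variant of the DDL above with every NUMBER(38,0) column declared as VARCHAR2(38 CHAR) instead (the table name suffix here is only illustrative, and the column list is abbreviated; the full list is as in the original DDL):

```sql
-- Sketch of the workaround table: same structure as
-- ECI_ORA.CARTERA_XPAN_PROD_EXT_15, but with the NUMBER(38,0) columns
-- declared as VARCHAR2(38 CHAR). Only the first and last columns are
-- shown; the remaining INDVALO* columns follow the same pattern.
CREATE TABLE "ECI_ORA"."CARTERA_XPAN_PROD_EXT_15_WA"  -- hypothetical name
( "TIINDI"   VARCHAR2(38 CHAR),   -- was NUMBER(38,0)
  "CODINDI"  VARCHAR2(38 CHAR),   -- was NUMBER(38,0)
  -- ... remaining columns unchanged in name, each NUMBER(38,0)
  -- replaced by VARCHAR2(38 CHAR) ...
  "NIVEL"    VARCHAR2(38 CHAR)    -- was NUMBER(38,0)
) TABLESPACE "ECI_ORA_DAT";
```

With this variant as the oraoop.template.table target, the same export completes, which is what points the failure at the NUMBER bind handling rather than at the data itself.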
> Error with append_values hint and large tables
> ----------------------------------------------
>
> Key: SQOOP-3106
> URL: https://issues.apache.org/jira/browse/SQOOP-3106
> Project: Sqoop
> Issue Type: Bug
> Components: connectors/oracle
> Affects Versions: 1.4.5
> Reporter: Antonio Villar
> Labels: append_values, export, oracle
>
> I'm using the Oracle Data Connector for Oracle and Hadoop to export data from
> Hive to Oracle. When I use the APPEND_VALUES hint I get an exception and the
> mappers never finish.
> This only happens when I export more than 18 columns of the original table.
> The command I'm using is:
> {code}
> sqoop export \
> -D oraoop.partitioned=true \
> -D oraoop.template.table=ECI_ORA.CARTERA_XPAN_PROD_EXT_15 \
> -D oraoop.oracle.append.values.hint.usage=ON \
> -connect jdbc:oracle:thin:ECI_ORA/[email protected]:1522/s_dpedwr_datapool.lacaixa.es \
> --num-mappers 4 \
> --table ECI_ORA.AAA_1GIGA \
> --columns "TIINDI,CODINDI,EMPIMP,OFIIMP,EMPEMI,TICOGE,CONCSIG,CDCREP,EMPDIV,CODDIV,CODCARTE,EMPGEST,CDSEGCLI,TIPCAR,NUESTCE,TIDADCE,EQUIPO,CDSEGBAN,CODPROD,INDVALO101,INDVALO102,INDVALO103,INDVALO104,INDVALO105,INDVALO106,INDVALO107,INDVALO108,INDVALO109,INDVALO110,INDVALO111,INDVALO112,INDVALO113,INDVALO114,INDVALO115,INDVALO116,INDVALO117,INDVALO118,INDVALO119,INDVALO201,INDVALO202,INDVALO203,INDVALO204,INDVALO205,INDVALO206,INDVALO207,INDVALO208,INDVALO209,INDVALO210,INDVALO211,INDVALO212,INDVALO213,INDVALO214,INDVALO215,INDVALO216,INDVALO217,INDVALO218,INDVALO219,NIVEL" \
> --export-dir /apps/eciora/hive/l04_eciora.db/cartera_xpan_prod_ext_1 \
> --input-fields-terminated-by '\01' \
> --input-lines-terminated-by '\n' \
> --input-null-non-string '\\\\N' \
> --input-null-string '\\\\N' \
> --direct
> {code}
> This is the log of the failing mapper:
> {code}
> 2016-12-29 11:39:36,886 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2016-12-29 11:39:36,987 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 2016-12-29 11:39:36,987 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system
> started
> 2016-12-29 11:39:36,998 INFO [main] org.apache.hadoop.mapred.YarnChild:
> Executing with tokens:
> 2016-12-29 11:39:36,998 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind:
> mapreduce.job, Service: job_1481807148415_6344, Ident:
> (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@6e171cd7)
> 2016-12-29 11:39:37,079 INFO [main] org.apache.hadoop.mapred.YarnChild:
> Sleeping for 0ms before retrying again. Got null now.
> 2016-12-29 11:39:37,364 INFO [main] org.apache.hadoop.mapred.YarnChild:
> mapreduce.cluster.local.dir for child:
> /u12/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u11/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u10/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u09/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u08/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u07/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u06/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u05/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u04/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u03/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u02/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344,/u01/hadoop/yarn/nm/usercache/Ieciora1/appcache/application_1481807148415_6344
> 2016-12-29 11:39:37,645 INFO [main]
> org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated.
> Instead, use dfs.metrics.session-id
> 2016-12-29 11:39:38,182 INFO [main] org.apache.hadoop.mapred.Task: Using
> ResourceCalculatorProcessTree : [ ]
> 2016-12-29 11:39:38,528 INFO [main] org.apache.hadoop.mapred.MapTask:
> Processing split: org.apache.sqoop.mapreduce.hcat.SqoopHCatInputSplit@62f68dff
> 2016-12-29 11:39:38,572 WARN [main]
> org.apache.sqoop.manager.oracle.OraOopUtilities: System property
> java.security.egd is not set to file:///dev/urandom - Oracle connections may
> time out.
> 2016-12-29 11:39:38,575 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: The number of
> batch-inserts to perform per commit has been changed from 0 to 1. This is in
> response to the Oracle APPEND_VALUES hint being used.
> 2016-12-29 11:39:38,576 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: The number of rows
> per batch-insert has been changed from 0 to 5000. This is in response to the
> Oracle APPEND_VALUES hint being used.
> 2016-12-29 11:39:39,142 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: The number of rows
> per batch is: 5000
> 2016-12-29 11:39:39,142 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: The number of batches
> per commit is: 1
> 2016-12-29 11:39:39,341 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: This record writer is
> connected to Oracle via the JDBC URL:
> "oracle.jdbc.driver.T4CConnection@72a85671"
> to the Oracle instance: "dpdb1rb1"
> 2016-12-29 11:39:39,348 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOracleQueries: Session Time Zone set to
> GMT
> 2016-12-29 11:39:39,350 INFO [main]
> org.apache.sqoop.manager.oracle.OracleConnectionFactory: Initializing Oracle
> session with SQL :
> begin
> dbms_application_info.set_module(module_name => 'Data Connector for Oracle
> and Hadoop', action_name => 'export 20161229113914CET');
> end;
> 2016-12-29 11:39:39,351 INFO [main]
> org.apache.sqoop.manager.oracle.OracleConnectionFactory: Initializing Oracle
> session with SQL : alter session force parallel dml
> 2016-12-29 11:39:39,351 INFO [main]
> org.apache.sqoop.manager.oracle.OracleConnectionFactory: Initializing Oracle
> session with SQL : alter session disable parallel query
> 2016-12-29 11:39:39,500 INFO [main]
> org.apache.hive.hcatalog.mapreduce.InternalUtil: Initializing
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe with properties
> {name=l04_eciora.cartera_xpan_prod_ext_10000_rows, numFiles=1,
> columns.types=int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),decimal(15,2),int,
> serialization.format=1,
> columns=tiindi,codindi,empimp,ofiimp,empemi,ticoge,concsig,cdcrep,empdiv,coddiv,codcarte,empgest,cdsegcli,tipcar,nuestce,tidadce,equipo,cdsegban,codprod,indvalo101,indvalo102,indvalo103,indvalo104,indvalo105,indvalo106,indvalo107,indvalo108,indvalo109,indvalo110,indvalo111,indvalo112,indvalo113,indvalo114,indvalo115,indvalo116,indvalo117,indvalo118,indvalo119,indvalo201,indvalo202,indvalo203,indvalo204,indvalo205,indvalo206,indvalo207,indvalo208,indvalo209,indvalo210,indvalo211,indvalo212,indvalo213,indvalo214,indvalo215,indvalo216,indvalo217,indvalo218,indvalo219,nivel,
> rawDataSize=1571259, numRows=10000,
> serialization.lib=org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,
> COLUMN_STATS_ACCURATE=true, totalSize=1581259, serialization.null.format=\N,
> transient_lastDdlTime=1482491985}
> 2016-12-29 11:39:40,299 INFO [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: Batch-Mode insert
> statement:
> insert /*+APPEND_VALUES*/ into "ECI_ORA"."AAA_1GIGA"
> (TIINDI
> ...
> ,NIVEL)
> values
> (:TIINDI
> ...
> ,:NIVEL)
> 2016-12-29 11:39:40,299 DEBUG [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: Prepared Statement
> SQL:
> insert /*+APPEND_VALUES*/ into "ECI_ORA"."AAA_1GIGA"
> (TIINDI
> ...
> ,:NIVEL)
> 2016-12-29 11:39:41,279 DEBUG [main]
> org.apache.sqoop.manager.oracle.OraOopOutputFormatBase: Prepared Statement
> SQL:
> insert /*+APPEND_VALUES*/ into "ECI_ORA"."AAA_1GIGA"
> (TIINDI
> ...
> ,:INDVALO219
> ,:NIVEL)
> 2016-12-29 11:39:41,679 ERROR [Thread-11]
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread
> Thread[Thread-11,5,main] threw an Exception.
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at
> oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12208)
> at
> oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
> at
> org.apache.sqoop.mapreduce.AsyncSqlOutputFormat$AsyncSqlExecThread.run(AsyncSqlOutputFormat.java:231)
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)