[
https://issues.apache.org/jira/browse/MAPREDUCE-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved MAPREDUCE-7254.
---------------------------------------
Resolution: Not A Bug
> sqoop on hadoop3.1 doesn't work
> -------------------------------
>
> Key: MAPREDUCE-7254
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7254
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: job submission
> Affects Versions: 3.1.0
> Reporter: Youquan Lin
> Priority: Major
>
> I have a MySQL table called admin that contains "linyouquan", and I want to
> import its data from MySQL to the local filesystem. I chose Sqoop for this.
> Sqoop on Hadoop 2.6 works, but Sqoop on Hadoop 3.1 does not.
> My commands and the resulting error output from Sqoop on Hadoop 3.1 follow.
> # My commands
> {code:bash}
> export HADOOP_HOME=/home/linyouquan/hadoop3-hadoop-pack
> {code}
> {code:bash}
> sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" --connect
> jdbc:mysql://dbHost:dbPort/dbName --username dbUser --password dbPasswd
> --table admin --target-dir sqoop_import_user
> {code}
> # Error output from Sqoop on Hadoop 3.1
> {code:none}
> Warning:
> /home/linyouquan/yarn/YARN/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hbase does
> not exist! HBase imports will fail.
> Please set $HBASE_HOME to the root of your HBase installation.
> Warning:
> /home/linyouquan/yarn/YARN/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../hcatalog
> does not exist! HCatalog jobs will fail.
> Please set $HCAT_HOME to the root of your HCatalog installation.
> Warning:
> /home/linyouquan/yarn/YARN/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../accumulo
> does not exist! Accumulo imports will fail.
> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
> Warning:
> /home/linyouquan/yarn/YARN/sqoop-1.4.7.bin__hadoop-2.6.0/bin/../../zookeeper
> does not exist! Accumulo imports will fail.
> Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
> 2019-12-24 18:29:55,473 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
> 2019-12-24 18:29:55,509 WARN tool.BaseSqoopTool: Setting your password on the
> command-line is insecure. Consider using -P instead.
> 2019-12-24 18:29:55,623 INFO manager.MySQLManager: Preparing to use a MySQL
> streaming resultset.
> 2019-12-24 18:29:55,623 INFO tool.CodeGenTool: Beginning code generation
> 2019-12-24 18:29:56,087 INFO manager.SqlManager: Executing SQL statement:
> SELECT t.* FROM `admin` AS t LIMIT 1
> 2019-12-24 18:29:56,110 INFO manager.SqlManager: Executing SQL statement:
> SELECT t.* FROM `admin` AS t LIMIT 1
> 2019-12-24 18:29:56,116 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is
> /home/linyouquan/hadoop3-hadoop-pack
> Note:
> /tmp/sqoop-linyouquan/compile/cf7897369f7ede4babaf39adfd2e55aa/admin.java
> uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 2019-12-24 18:29:57,451 INFO orm.CompilationManager: Writing jar file:
> /tmp/sqoop-linyouquan/compile/cf7897369f7ede4babaf39adfd2e55aa/admin.jar
> 2019-12-24 18:29:57,464 WARN manager.MySQLManager: It looks like you are
> importing from mysql.
> 2019-12-24 18:29:57,464 WARN manager.MySQLManager: This transfer can be
> faster! Use the --direct
> 2019-12-24 18:29:57,464 WARN manager.MySQLManager: option to exercise a
> MySQL-specific fast path.
> 2019-12-24 18:29:57,464 INFO manager.MySQLManager: Setting zero DATETIME
> behavior to convertToNull (mysql)
> 2019-12-24 18:29:57,468 INFO mapreduce.ImportJobBase: Beginning import of
> admin
> 2019-12-24 18:29:57,469 INFO Configuration.deprecation: mapred.job.tracker is
> deprecated. Instead, use mapreduce.jobtracker.address
> 2019-12-24 18:29:57,639 INFO Configuration.deprecation: mapred.jar is
> deprecated. Instead, use mapreduce.job.jar
> 2019-12-24 18:29:57,770 INFO Configuration.deprecation: mapred.map.tasks is
> deprecated. Instead, use mapreduce.job.maps
> 2019-12-24 18:29:58,051 INFO impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2019-12-24 18:29:58,117 INFO impl.MetricsSystemImpl: Scheduled Metric
> snapshot period at 10 second(s).
> 2019-12-24 18:29:58,117 INFO impl.MetricsSystemImpl: JobTracker metrics
> system started
> 2019-12-24 18:29:58,244 INFO db.DBInputFormat: Using read commited
> transaction isolation
> 2019-12-24 18:29:58,260 INFO mapreduce.JobSubmitter: number of splits:1
> 2019-12-24 18:29:58,378 INFO mapreduce.JobSubmitter: Submitting tokens for
> job: job_local1105755779_0001
> 2019-12-24 18:29:58,380 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 2019-12-24 18:29:58,599 INFO mapred.LocalDistributedCacheManager: Creating
> symlink: /tmp/hadoop-linyouquan/mapred/local/1577183398474/libjars <-
> /home/linyouquan/yarn/YARN/libjars/*
> 2019-12-24 18:29:58,601 WARN fs.FileUtil: Command 'ln -s
> /tmp/hadoop-linyouquan/mapred/local/1577183398474/libjars
> /home/linyouquan/yarn/YARN/libjars/*' failed 1 with: ln: creating symbolic
> link `/home/linyouquan/yarn/YARN/libjars/*': No such file or
> directory2019-12-24 18:29:58,601 WARN mapred.LocalDistributedCacheManager:
> Failed to create symlink:
> /tmp/hadoop-linyouquan/mapred/local/1577183398474/libjars <-
> /home/linyouquan/yarn/YARN/libjars/*
> 2019-12-24 18:29:58,602 INFO mapred.LocalDistributedCacheManager: Localized
> file:/tmp/hadoop/mapred/staging/linyouquan1105755779/.staging/job_local1105755779_0001/libjars
> as file:/tmp/hadoop-linyouquan/mapred/local/1577183398474/libjars
> 2019-12-24 18:29:58,673 INFO mapreduce.Job: The url to track the job:
> http://localhost:8080/
> 2019-12-24 18:29:58,674 INFO mapreduce.Job: Running job:
> job_local1105755779_0001
> 2019-12-24 18:29:58,674 INFO mapred.LocalJobRunner: OutputCommitter set in
> config null
> 2019-12-24 18:29:58,682 INFO output.FileOutputCommitter: File Output
> Committer Algorithm version is 2
> 2019-12-24 18:29:58,682 INFO output.FileOutputCommitter: FileOutputCommitter
> skip cleanup _temporary folders under output directory:false, ignore cleanup
> failures: false
> 2019-12-24 18:29:58,682 INFO mapred.LocalJobRunner: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2019-12-24 18:29:58,701 INFO mapred.LocalJobRunner: Waiting for map tasks
> 2019-12-24 18:29:58,701 INFO mapred.LocalJobRunner: Starting task:
> attempt_local1105755779_0001_m_000000_0
> 2019-12-24 18:29:58,725 INFO output.FileOutputCommitter: File Output
> Committer Algorithm version is 2
> 2019-12-24 18:29:58,725 INFO output.FileOutputCommitter: FileOutputCommitter
> skip cleanup _temporary folders under output directory:false, ignore cleanup
> failures: false
> 2019-12-24 18:29:58,739 INFO mapred.Task: Using
> ResourceCalculatorProcessTree : [ ]
> 2019-12-24 18:29:58,744 INFO db.DBInputFormat: Using read commited
> transaction isolation
> 2019-12-24 18:29:58,748 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
> 2019-12-24 18:29:58,754 INFO mapred.LocalJobRunner: map task executor
> complete.
> 2019-12-24 18:29:58,755 WARN mapred.LocalJobRunner: job_local1105755779_0001
> java.lang.Exception: java.lang.RuntimeException:
> java.lang.ClassNotFoundException: Class admin not found
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException:
> Class admin not found
> at
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2597)
> at
> org.apache.sqoop.mapreduce.db.DBConfiguration.getInputClass(DBConfiguration.java:403)
> at
> org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.createDBRecordReader(DataDrivenDBInputFormat.java:270)
> at
> org.apache.sqoop.mapreduce.db.DBInputFormat.createRecordReader(DBInputFormat.java:266)
> at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:534)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:777)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:348)
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: Class admin not found
> at
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2501)
> at
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2595)
> ... 12 more
> 2019-12-24 18:29:59,678 INFO mapreduce.Job: Job job_local1105755779_0001
> running in uber mode : false
> 2019-12-24 18:29:59,679 INFO mapreduce.Job: map 0% reduce 0%
> 2019-12-24 18:29:59,682 INFO mapreduce.Job: Job job_local1105755779_0001
> failed with state FAILED due to: NA
> 2019-12-24 18:29:59,689 INFO mapreduce.Job: Counters: 0
> 2019-12-24 18:29:59,693 WARN mapreduce.Counters: Group FileSystemCounters is
> deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
> 2019-12-24 18:29:59,695 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
> 1.913 seconds (0 bytes/sec)
> 2019-12-24 18:29:59,695 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
> 2019-12-24 18:29:59,695 INFO mapreduce.ImportJobBase: Retrieved 0 records.
> 2019-12-24 18:29:59,695 ERROR tool.ImportTool: Import failed: Import job
> failed!
> {code}
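The failure chain in the log is: LocalDistributedCacheManager could not create the symlink {{/home/linyouquan/yarn/YARN/libjars/*}} (the working directory has no {{libjars}} subdirectory), so the Sqoop-generated {{admin.jar}} was never linked into the job's working directory, and the local map task then failed with {{ClassNotFoundException: Class admin not found}}. The {{job_local...}} id shows the job ran under the LocalJobRunner rather than on a YARN cluster. A minimal workaround sketch, assuming the paths shown in the log above (neither step is a confirmed resolution from this issue):

```shell
# Sketch only -- paths are copied from the log above; whether either step
# fixes a given installation is an assumption, not a confirmed resolution.

# 1. Pre-create the libjars directory under the job's working directory so
#    the 'ln -s ... libjars/*' symlink step has an existing parent to
#    write into.
mkdir -p /home/linyouquan/yarn/YARN/libjars

# 2. Alternatively, put the Sqoop-generated jar (which contains the
#    generated class "admin") on Hadoop's classpath explicitly before
#    re-running the import.
export HADOOP_CLASSPATH="/tmp/sqoop-linyouquan/compile/cf7897369f7ede4babaf39adfd2e55aa/admin.jar:${HADOOP_CLASSPATH}"
```

Running the import against a real YARN cluster (setting {{mapreduce.framework.name}} to {{yarn}} in mapred-site.xml) would avoid the LocalJobRunner symlink path entirely.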
--
This message was sent by Atlassian Jira
(v8.3.4#803005)