Hi Victor,

> 12/06/28 13:33:16 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid
> "mapreduce.jobtracker.address" configuration value for LocalJobRunner :
> "hadooptest-01.mydomain:8021"
> 12/06/28 13:33:16 ERROR security.UserGroupInformation:
> PriviledgedActionException as:victor.sanchez (auth:SIMPLE)
> cause:java.io.IOException: Cannot initialize Cluster. Please check your
> configuration for mapreduce.framework.name and the correspond server
> addresses.
> 12/06/28 13:33:16 ERROR tool.ImportTool: Encountered IOException running
> import job: java.io.IOException: Cannot initialize Cluster. Please check
> your configuration for mapreduce.framework.name and the correspond server
> addresses.


The exception is thrown because Sqoop assumes local mode while Hadoop is
configured in cluster mode. Provided that you are indeed running Hadoop in
cluster mode, the question is why Sqoop assumes local mode.
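
For reference, on an MRv1 client one would expect mapred-site.xml to carry
something like the following. This is only a sketch, not your actual config:
the JobTracker address is copied from the error message above, and "classic"
is the MRv1 value for mapreduce.framework.name (a "local" or missing value is
what makes clients fall back to the LocalJobRunner):

```xml
<configuration>
  <!-- "classic" selects MRv1 job submission; "local" or unset means LocalJobRunner -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>classic</value>
  </property>
  <!-- JobTracker address as it appears in the error message -->
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hadooptest-01.mydomain:8021</value>
  </property>
</configuration>
```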

Would you mind providing the content of the following config files that CM4
generated for you in /etc/hadoop/conf?

   - core-site.xml
   - hdfs-site.xml
   - mapred-site.xml

The Apache mailing list strips attachments, so you will have to copy and
paste their contents into an email.
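
If it helps, a small loop like this prints all three files with headers so
the whole output can be pasted in one go (the directory is the CM4 client
config path mentioned above; adjust it if your deployment differs):

```shell
# Print each client config file with a header so the whole output
# can be pasted into a single reply.
dump_confs() {
  dir=$1
  for f in core-site.xml hdfs-site.xml mapred-site.xml; do
    echo "===== $f ====="
    cat "$dir/$f" 2>/dev/null
  done
}

# Run against the directory from the thread, if it exists on this box.
[ -d /etc/hadoop/conf ] && dump_confs /etc/hadoop/conf || true
```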

Thanks a lot,
Cheolsoo

On Thu, Jun 28, 2012 at 2:49 AM, Victor Sanchez
<[email protected]>wrote:

>  Hi Cheolsoo,
>
> Well, as you mentioned, there was
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory inside
> /etc/sqoop/conf/managers.d/
>
> I removed it, and now I can actually connect and list the tables, but ….
>
> $ sqoop list-tables --connect
> 'jdbc:sqlserver://hadooptest01;username=victor;password=victor;database=hadoopSQL01'
>
> 12/06/28 13:33:38 INFO manager.SqlManager: Using default fetchSize of 1000
>
> Table1
> Table2
> Table3
> …
>
> If I try to import, I run into another issue.
>
> $ sqoop import --connect
> 'jdbc:sqlserver://hadooptest01;username=victor;password=victor;database=hadoopSQL01'
> --table Table1 --target-dir /test/Table1
>
> 12/06/28 13:33:07 INFO manager.SqlManager: Using default fetchSize of 1000
> 12/06/28 13:33:07 INFO tool.CodeGenTool: Beginning code generation
> 12/06/28 13:33:08 INFO manager.SqlManager: Executing SQL statement: SELECT
> t.* FROM Table1 AS t WHERE 1=0
> 12/06/28 13:33:08 INFO orm.CompilationManager: HADOOP_HOME is
> /usr/lib/hadoop
> Note: /tmp/sqoop-victor.sanchez/compile/5567c0bfbd9fd8af0ab8b0715c2245d3/Table1.java
> uses or overrides a deprecated API.
> Note: Recompile with -Xlint:deprecation for details.
> 12/06/28 13:33:12 INFO orm.CompilationManager: Writing jar file:
> /tmp/sqoop-victor.sanchez/compile/5567c0bfbd9fd8af0ab8b0715c2245d3/Table1.jar
> 12/06/28 13:33:13 INFO mapreduce.ImportJobBase: Beginning import of Table1
> 12/06/28 13:33:13 WARN conf.Configuration: mapred.job.tracker is
> deprecated. Instead, use mapreduce.jobtracker.address
> 12/06/28 13:33:14 WARN conf.Configuration: mapred.jar is deprecated.
> Instead, use mapreduce.job.jar
> 12/06/28 13:33:16 WARN conf.Configuration: mapred.map.tasks is deprecated.
> Instead, use mapreduce.job.maps
> 12/06/28 13:33:16 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid
> "mapreduce.jobtracker.address" configuration value for LocalJobRunner :
> "hadooptest-01.mydomain:8021"
> 12/06/28 13:33:16 ERROR security.UserGroupInformation:
> PriviledgedActionException as:victor.sanchez (auth:SIMPLE)
> cause:java.io.IOException: Cannot initialize Cluster. Please check your
> configuration for mapreduce.framework.name and the correspond server
> addresses.
> 12/06/28 13:33:16 ERROR tool.ImportTool: Encountered IOException running
> import job: java.io.IOException: Cannot initialize Cluster. Please check
> your configuration for mapreduce.framework.name and the correspond server
> addresses.
>
>         at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
>         at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1196)
>         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1192)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>         at org.apache.hadoop.mapreduce.Job.connect(Job.java:1191)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1220)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1244)
>         at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
>         at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
>         at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:465)
>         at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
>         at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
>         at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
>         at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
>         at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
>
>
> I have checked that all the services are running (the JobTracker is up, and
> I can actually see a history of jobs triggered from Hue), and my user has
> all the permissions to write to the HDFS directory that is being used as
> the target.
>
> Googling around for a while, I figured out that there might be another bug
> going on (but that one is with MapReduce on YARN, and I'm using MRv1, not
> YARN).
>
> If you have any tip for fixing this, it would be highly appreciated!
>
> /Victor
>
>
> *From:* Cheolsoo Park [mailto:[email protected]]
> *Sent:* 27 June 2012 18:39
> *To:* [email protected]
> *Subject:* Re: Sqoop 1.4.2 checkout from trunk (installation problem)
> -sqoop 1.4.1 incompatible with MSSQL Server Connector
>
> Hi Victor,
>
> I was able to reproduce your error by having the following connector file
> in the managers.d dir (/etc/sqoop/conf/managers.d):
>
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
>
> Can you please double-check whether you have any file that doesn't contain
> key-value pairs in the managers.d directory? If you do, that should be the
> problem.
>
> Thanks,
> Cheolsoo
>
> On Wed, Jun 27, 2012 at 9:01 AM, Cheolsoo Park <[email protected]>
> wrote:
>
> Hi Victor,
>
>         at org.apache.sqoop.ConnFactory.addManagersFromFile(ConnFactory.java:152)
>
> I suspect that the error you're seeing is a regression of SQOOP-505. Can
> you please check what the content of your connector file looks like? For
> example,
>
> com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory=/usr/lib/sqoop/lib/sqoop-sqlserver-1.0.jar
>
> Thanks,
> Cheolsoo
>
> On Wed, Jun 27, 2012 at 8:32 AM, Victor Sanchez <[email protected]>
> wrote:
>
> Hi,
>
> I have a test cluster that runs RHEL6. I installed Cloudera Manager 4
> (which includes CDH4), and I have installed Sqoop.
>
> # sqoop version
> *Sqoop 1.4.1-cdh4.0.0*
> git commit id 44ef1bef07d93e3fcf79bdc1150de6c278ad7845
> Compiled by jenkins on Mon Jun  4 17:43:14 PDT 2012
>
> After all the installation and configuration, I ran into the problem of
> not being able to run a Sqoop import. I figured out that there is a bug in
> the MS SQL Connector for SQL Server 2008 R2 (
> https://issues.apache.org/jira/browse/SQOOP-480).
>
> So I checked out the code
>
> 'svn co https://svn.apache.org/repos/asf/sqoop/trunk/ sqoop'
>
> and built the project by running ant. As a result, I got 2 jar files
> (inside the build folder):
>
> *sqoop-1.4.2-incubating-SNAPSHOT.jar*
> *sqoop-test-1.4.2-incubating-SNAPSHOT.jar*
>
> After all this, I used these files to replace the ones in the instance
> with the Sqoop installation.
> So I removed the jar files in /usr/lib/sqoop/ (sqoop-1.4.1-cdh4.0.0.jar
> and sqoop-test-1.4.1-cdh4.0.0.jar), replacing them with the files above.
>
> After that I get
>
> # sqoop version
> *Sqoop 1.4.2-incubating-SNAPSHOT*
> git commit id
> Compiled by victor.sanchez on Wed Jun 27 10:33:01 EDT 2012
>
> But when I try to run list-tables … it fails like this:
>
> # sqoop list-tables --connect
> 'jdbc:sqlserver://hadooptest01;username=victor;password=victor;database=hadoopDB_SQL'
>
> *12/06/27 16:18:29 ERROR tool.BaseSqoopTool: Got error creating database
> manager: java.lang.StringIndexOutOfBoundsException: String index out of
> range: -1*
>
>         at java.lang.String.substring(String.java:1937)
>         at org.apache.sqoop.ConnFactory.addManagersFromFile(ConnFactory.java:152)
>         at org.apache.sqoop.ConnFactory.loadManagersFromConfDir(ConnFactory.java:224)
>         at org.apache.sqoop.ConnFactory.instantiateFactories(ConnFactory.java:83)
>         at org.apache.sqoop.ConnFactory.<init>(ConnFactory.java:60)
>         at com.cloudera.sqoop.ConnFactory.<init>(ConnFactory.java:36)
>         at org.apache.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:200)
>         at org.apache.sqoop.tool.ListTablesTool.run(ListTablesTool.java:44)
>         at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
>         at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
>         at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
>
> Notice that if I put back the “old” jar files, sqoop list-tables works,
> but of course the incompatibility bug (
> https://issues.apache.org/jira/browse/SQOOP-480) is still there.
>
> If anyone has an idea of how to update my current Sqoop installation with
> my manual build, I would appreciate any tip.
>
> Thanks in advance!
>
> /Victor
>
>
> Victor Sanchez
> Database Architect
>
> Net Entertainment NE AB, Luntmakargatan 18, SE-111 37, Stockholm, SE
> T: , M: 076 000 7297, F: +46 8 578 545 10
> [email protected] www.netent.com
>
> *Better Games*
>
