[ https://issues.apache.org/jira/browse/SQOOP-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Attila Szabo updated SQOOP-2894:
--------------------------------
    Fix Version/s:     (was: 1.4.7)
                   1.5.0

> Hive import with Parquet failed in Kerberos enabled cluster
> -----------------------------------------------------------
>
>                 Key: SQOOP-2894
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2894
>             Project: Sqoop
>          Issue Type: Bug
>          Components: hive-integration, tools
>    Affects Versions: 1.4.6
>         Environment: Redhat 6.6, Sqoop 1.4.6+Hadoop 2.7.2+Hive 1.2.1
>            Reporter: Ping Wang
>              Labels: security
>             Fix For: 1.5.0
>
>
> Importing data from an external database into Hive with the Parquet option 
> fails in a Kerberos-enabled environment. (It succeeds without Kerberos.) 

> The sqoop command I used:
> sqoop import --connect jdbc:db2://xxx:50000/testdb --username xxx --password 
> xxx --table users --hive-import -hive-table users3 --as-parquetfile -m 1
> The import job failed:

> ......
> 2016-02-26 04:20:07,020 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
> 2016-02-26 04:20:08,088 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
> null
> 2016-02-26 04:20:08,918 INFO [main] hive.metastore: Trying to connect to 
> metastore with URI thrift://xxx:9083
> 2016-02-26 04:30:09,207 WARN [main] hive.metastore: set_ugi() not successful, 
> Likely cause: new client talking to old server. Continuing without it.
> org.apache.thrift.transport.TTransportException: 
> java.net.SocketTimeoutException: Read timed out
>     at 
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
>     at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>     at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:3688)
>     at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:3674)
>     at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:448)
>     at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:237)
>     at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:182)
>     at org.kitesdk.data.spi.hive.MetaStoreUtil.<init>(MetaStoreUtil.java:82)
>     at 
> org.kitesdk.data.spi.hive.HiveAbstractMetadataProvider.getMetaStoreUtil(HiveAbstractMetadataProvider.java:63)
>     at 
> org.kitesdk.data.spi.hive.HiveAbstractMetadataProvider.resolveNamespace(HiveAbstractMetadataProvider.java:270)
>     at 
> org.kitesdk.data.spi.hive.HiveAbstractMetadataProvider.resolveNamespace(HiveAbstractMetadataProvider.java:255)
>     at 
> org.kitesdk.data.spi.hive.HiveAbstractMetadataProvider.load(HiveAbstractMetadataProvider.java:102)
>     at 
> org.kitesdk.data.spi.filesystem.FileSystemDatasetRepository.load(FileSystemDatasetRepository.java:192)
>     at org.kitesdk.data.Datasets.load(Datasets.java:108)
>     at org.kitesdk.data.Datasets.load(Datasets.java:165)
>     at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.load(DatasetKeyOutputFormat.java:510)
>     at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.getOutputCommitter(DatasetKeyOutputFormat.java:473)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:476)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:458)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1560)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:458)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:377)
>     at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1518)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
>     at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
> ....... 
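The read timeout above is consistent with the metastore expecting a SASL/Kerberos handshake while the metastore client opened from the Kite code path in the MRAppMaster uses a plain Thrift transport. As a hedged sketch only (not a confirmed fix), one thing worth trying is passing the standard Hive metastore security properties through to the job configuration explicitly, in case they are not being propagated; the principal below is a placeholder for the cluster's actual metastore principal:

```shell
# Sketch under assumptions: hive.metastore.sasl.enabled and
# hive.metastore.kerberos.principal are standard Hive properties, but whether
# forwarding them via -D reaches the AM-side client here is unverified.
# hive/_HOST@EXAMPLE.COM is a placeholder, not a real principal.
sqoop import \
  -D hive.metastore.sasl.enabled=true \
  -D hive.metastore.kerberos.principal=hive/_HOST@EXAMPLE.COM \
  --connect jdbc:db2://xxx:50000/testdb \
  --username xxx --password xxx \
  --table users --hive-import --hive-table users3 \
  --as-parquetfile -m 1
```

If these properties are already set in hive-site.xml, the same reasoning suggests checking that hive-site.xml is actually on the classpath of the submitted MapReduce job.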



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
