hi all,
My Spark version is 1.2, and my job uses saveAsNewAPIHadoopFile. After running many times it occasionally fails with the error below. I think an operation may be missing: saveAsHadoopDataset(conf: JobConf) calls SparkHadoopUtil.get.addCredentials(hadoopConf) (added for SPARK-1203 <https://spark-project.atlassian.net/browse/SPARK-1203>), but saveAsNewAPIHadoopDataset(conf: Configuration) does not.

Because saveAsNewAPIHadoopDataset has no call like SparkHadoopUtil.get.addCredentials(hadoopConf), it can hit the same problem that SPARK-1203 <https://spark-project.atlassian.net/browse/SPARK-1203> describes. I have tried to solve this by adding code to the def saveAsNewAPIHadoopDataset(conf: Configuration) method in class PairRDDFunctions:

1) after the job-level context is created:
   val jobTaskContext = newTaskAttemptContext(wrappedConf.value, jobAttemptId)
 + jobTaskContext.getCredentials.mergeAll(UserGroupInformation.getCurrentUser.getCredentials)

2) after the task-level context is created:
   val hadoopContext = newTaskAttemptContext(wrappedConf.value, attemptId)
 + hadoopContext.getCredentials.mergeAll(UserGroupInformation.getCurrentUser.getCredentials)

3) after the output format is instantiated:
   val jobFormat = outfmt.newInstance
 + SparkHadoopUtil.get.addCredentials(new JobConf(hadoopConf))

I believe 2) does not work, and I have not tried 1) or 3) yet.
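For reference, here is a sketch of how changes 1) and 2) above might sit inside Spark 1.2's PairRDDFunctions.saveAsNewAPIHadoopDataset. This is an untested patch sketch, not a verified fix: only the lines around the insertion points are shown, the surrounding method body is elided, and in the real source the two contexts live in different scopes (1 on the driver, 2 inside the per-partition write function).

    import org.apache.hadoop.security.UserGroupInformation

    def saveAsNewAPIHadoopDataset(conf: Configuration) {
      // ... existing setup code ...

      // change 1): merge the current user's delegation tokens into the
      // driver-side job context so the commit path can reuse them instead
      // of asking the NameNode for a new token
      val jobTaskContext = newTaskAttemptContext(wrappedConf.value, jobAttemptId)
      jobTaskContext.getCredentials.mergeAll(
        UserGroupInformation.getCurrentUser.getCredentials)

      // ... inside the writeShard function run on each executor ...

      // change 2): same merge for the per-task attempt context
      val hadoopContext = newTaskAttemptContext(wrappedConf.value, attemptId)
      hadoopContext.getCredentials.mergeAll(
        UserGroupInformation.getCurrentUser.getCredentials)

      // ... rest of the method unchanged ...
    }

Note that for change 2) the merge runs on the executor, so it only helps if the executor's UserGroupInformation already carries the tokens; that may be why 2) alone did not work.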

Can anyone give me some advice on this problem? Thanks a lot.

yuemeng


2015-04-21 20:12:16,606 [Thread_TimerScan_5-1-1-1-2] INFO org.apache.spark.scheduler.DAGScheduler - Job 9 failed: saveAsNewAPIHadoopFile at PublicationService.java:102, took 0.005417 s
2015-04-21 20:12:16,756 [Thread_TimerScan_5-1-1-1-1] INFO org.apache.spark.SparkContext - Starting job: saveAsNewAPIHadoopFile at JPRDDOperation.java:142
2015-04-21 20:12:16,762 [sparkDriver-akka.actor.default-dispatcher-13] WARN org.apache.spark.scheduler.DAGScheduler - Creating new stage failed due to exception - job: 10
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:478)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:912)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1612)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at org.apache.hadoop.ipc.Client.call(Client.java:1363)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy51.getDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:862)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy52.getDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:948)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1377)
        at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:527)
        at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:505)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:242)


