[ 
https://issues.apache.org/jira/browse/FALCON-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985242#comment-13985242
 ] 

Shwetha G S commented on FALCON-389:
------------------------------------

{quote}
The way we fixed this was we had to make sure all oozie servers that falcon 
talks to had the hadoop configs for all the hadoop servers falcon talks to.
{quote}
The export job is launched on the target cluster by the target oozie, which is 
fine. But the real issue is that the target export job has to connect to the 
source hcat (which should be ok if the hcat versions are compatible) and will 
write to the source hdfs (will this work with plain hdfs if the versions are 
the same? webhdfs should work in all cases?). I don't understand how oozie 
having all the hadoop confs solves the issue here. Oozie doesn't even have to 
talk to the other clusters in this case; it's the hadoop job which has to. 
What am I missing?
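For reference, the cross-cluster endpoints being discussed come from the Falcon cluster entity definition. A minimal sketch of a source cluster entity (hostnames, ports, and versions below are hypothetical placeholders, not from this issue), showing the registry interface the export job would use to reach hcat and the write interface it would use to reach hdfs:

```xml
<cluster name="source-cluster" colo="source-colo" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <!-- readonly endpoint for remote reads (hftp/webhdfs) -->
    <interface type="readonly" endpoint="hftp://source-nn:50070" version="2.2.0"/>
    <!-- write endpoint: the hdfs the export job writes to -->
    <interface type="write" endpoint="hdfs://source-nn:8020" version="2.2.0"/>
    <!-- execute endpoint: resource manager on the source cluster -->
    <interface type="execute" endpoint="source-rm:8050" version="2.2.0"/>
    <!-- workflow endpoint: the oozie server for this cluster -->
    <interface type="workflow" endpoint="http://source-oozie:11000/oozie/" version="4.0.0"/>
    <!-- registry endpoint: the hcat/hive metastore the export job connects to -->
    <interface type="registry" endpoint="thrift://source-hcat:9083" version="0.12.0"/>
  </interfaces>
</cluster>
```

The point of contention above is whether only the launched hadoop job, or also the target oozie server, needs to resolve these source-side endpoints.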

{quote}
FALCON-401 is an easy fix, the namespace should include the target cluster name 
which is unique. Agreed that this is slightly inefficient but considering the 
fact that its not common to replicate from one source to multiple targets, 
simplicity wins.
{quote}
+1


> Submit Hcat export workflow to oozie on source cluster rather than to oozie 
> on destination cluster
> --------------------------------------------------------------------------------------------------
>
>                 Key: FALCON-389
>                 URL: https://issues.apache.org/jira/browse/FALCON-389
>             Project: Falcon
>          Issue Type: Improvement
>    Affects Versions: 0.4
>            Reporter: Arpit Gupta
>
> Noticed this on hadoop-2 with oozie 4.x: when you run an hcat replication 
> job where the source and destination clusters are different, all jobs are 
> submitted to oozie on the destination cluster. Oozie then runs a table 
> export job that it submits to the RM on cluster 1.
> Now, if the oozie server on the target cluster is not running with all the 
> hadoop configs, it will not know the appropriate hadoop settings and the 
> yarn job will fail. We saw jobs fail with errors like
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Password not 
> found for ApplicationAttempt appattempt_1395965672651_0010_000002
> even on an unsecure cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)