[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-21 Thread Weiwei Yang (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-14207:
---
Fix Version/s: 3.1.3

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}
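> For illustration, a minimal sketch of the kind of change I mean (not necessarily
> what HDFS-14207.001.patch does) is to move the create() call inside the existing
> try/catch, so the HadoopIllegalArgumentException is logged before the process exits:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     try {
>       // create() throws HadoopIllegalArgumentException when the HA
>       // configuration is missing or incomplete; calling it inside the
>       // try block lets us log the real cause instead of only seeing
>       // "ERROR: Cannot set priority of zkfc process" from the shell.
>       DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>           parser.getConfiguration());
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       // Pass the throwable to the logger so the stack trace lands in the zkfc log.
>       LOG.error("DFSZKFailoverController exiting due to earlier exception", t);
>       terminate(1, t);
>     }
>   }
> {code}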



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-21 Thread Weiwei Yang (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-14207:
---
Fix Version/s: 3.2.1

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-21 Thread Weiwei Yang (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HDFS-14207:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-15 Thread Fei Hui (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hui updated HDFS-14207:
---
Attachment: HDFS-14207.001.patch

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-15 Thread Fei Hui (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hui updated HDFS-14207:
---
Attachment: HDFS-14207.001.patch

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-15 Thread Fei Hui (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hui updated HDFS-14207:
---
Attachment: (was: HDFS-14207.001.patch)

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14207) ZKFC should catch exception when ha configuration missing

2019-01-15 Thread Fei Hui (JIRA)


 [ https://issues.apache.org/jira/browse/HDFS-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hui updated HDFS-14207:
---
Status: Patch Available  (was: Open)

> ZKFC should catch exception when ha configuration missing
> -
>
> Key: HDFS-14207
> URL: https://issues.apache.org/jira/browse/HDFS-14207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.3, 3.1.1
>Reporter: Fei Hui
>Priority: Major
> Attachments: HDFS-14207.001.patch
>
>
> When I test HDFS ZKFC with a wrong configuration, the zkfc process fails to
> start, and nothing shows up in the log except the command error below:
> {quote}
> ERROR: Cannot set priority of zkfc process 59556
> {quote}
> Debugging zkfc and digging into the code, I find that zkfc exits because of a
> HadoopIllegalArgumentException. I think we should catch this exception and
> log it.
> The code that throws the HadoopIllegalArgumentException is as follows:
> {code:java}
>   public static DFSZKFailoverController create(Configuration conf) {
>     Configuration localNNConf = DFSHAAdmin.addSecurityConfiguration(conf);
>     String nsId = DFSUtil.getNamenodeNameServiceId(conf);
>     if (!HAUtil.isHAEnabled(localNNConf, nsId)) {
>       throw new HadoopIllegalArgumentException(
>           "HA is not enabled for this namenode.");
>     }
>     String nnId = HAUtil.getNameNodeId(localNNConf, nsId);
>     if (nnId == null) {
>       String msg = "Could not get the namenode ID of this node. " +
>           "You may run zkfc on the node other than namenode.";
>       throw new HadoopIllegalArgumentException(msg);
>     }
>     NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
>     DFSUtil.setGenericConf(localNNConf, nsId, nnId, ZKFC_CONF_KEYS);
>
>     NNHAServiceTarget localTarget = new NNHAServiceTarget(
>         localNNConf, nsId, nnId);
>     return new DFSZKFailoverController(localNNConf, localTarget);
>   }
> {code}
> In the DFSZKFailoverController main function, we neither catch nor log it:
> {code:java}
>   public static void main(String args[])
>       throws Exception {
>     StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
>         args, LOG);
>     if (DFSUtil.parseHelpArgument(args,
>         ZKFailoverController.USAGE, System.out, true)) {
>       System.exit(0);
>     }
>
>     GenericOptionsParser parser = new GenericOptionsParser(
>         new HdfsConfiguration(), args);
>     DFSZKFailoverController zkfc = DFSZKFailoverController.create(
>         parser.getConfiguration());
>     try {
>       System.exit(zkfc.run(parser.getRemainingArgs()));
>     } catch (Throwable t) {
>       LOG.error("DFSZKFailOverController exiting due to earlier exception "
>           + t);
>       terminate(1, t);
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org