[
https://issues.apache.org/jira/browse/STORM-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943408#comment-14943408
]
Aaron Dossett commented on STORM-602:
-------------------------------------
My tentative conclusion is that the first error is not a defect. If the Hadoop
cluster is down at startup, failing the topology seems reasonable. Without
being able to establish a connection to HDFS and open a file, there is no
guarantee that the submitted configuration would ever work when Hadoop comes
back online. Failing at the outset seems like the better option.
Other thoughts?
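To make the distinction concrete, here is a minimal sketch of the behavior I have in mind. This is not the actual storm-hdfs implementation; the class name GuardedHdfsBolt, the fsUrl/outputPath parameters, and the fail-fast vs. fail-tuple split are assumptions for illustration only. The idea: throw in prepare() when HDFS is unreachable at startup (kill the topology), but catch write errors in execute() so a later, transient outage fails the tuple instead of the worker.
{code:java}
// Hypothetical sketch, not the storm-hdfs HdfsBolt: fail fast at startup,
// tolerate transient HDFS errors afterwards.
import java.net.URI;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class GuardedHdfsBolt extends BaseRichBolt {
    private final String fsUrl;      // e.g. "hdfs://namenode:8020" (assumed)
    private final String outputPath; // e.g. "/storm/out.txt" (assumed)

    private transient OutputCollector collector;
    private transient FileSystem fs;
    private transient FSDataOutputStream out;

    public GuardedHdfsBolt(String fsUrl, String outputPath) {
        this.fsUrl = fsUrl;
        this.outputPath = outputPath;
    }

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            // Fail fast: if HDFS is down at startup, there is no evidence the
            // submitted configuration will ever work, so let the topology die.
            fs = FileSystem.get(URI.create(fsUrl), new Configuration());
            out = fs.create(new Path(outputPath));
        } catch (Exception e) {
            throw new RuntimeException("Cannot reach HDFS at " + fsUrl, e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            out.writeBytes(tuple.getString(0) + "\n");
            collector.ack(tuple);
        } catch (Exception e) {
            // Transient outage after startup: report the error and fail the
            // tuple so it is replayed, rather than letting the exception
            // propagate and kill the worker.
            collector.reportError(e);
            collector.fail(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // this bolt only writes to HDFS, no output stream declared
    }
}
{code}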
> HdfsBolt dies when the hadoop node is not available
> ---------------------------------------------------
>
> Key: STORM-602
> URL: https://issues.apache.org/jira/browse/STORM-602
> Project: Apache Storm
> Issue Type: Bug
> Components: storm-hdfs
> Affects Versions: 0.9.3
> Environment: Ubuntu 14.04
> Reporter: clay teahouse
>
> When the Hadoop nodes are not available, HdfsBolt throws the following runtime
> error and dies, taking the topology down with it.
> 12154 [Thread-50-hdfsBolt2] ERROR backtype.storm.util - Halting process: ("Worker died")
> java.lang.RuntimeException: ("Worker died")
> at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:319) [storm-core-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
> at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
> at backtype.storm.daemon.worker$fn__4770$fn__4771.invoke(worker.clj:452) [storm-core-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
> at backtype.storm.daemon.executor$mk_executor_data$fn__3287$fn__3288.invoke(executor.clj:239) [storm-core-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
> at backtype.storm.util$async_loop$fn__458.invoke(util.clj:467) [storm-core-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]