Supervisor failed but workers continue to work

2016-09-17 Thread k-2f...@hotmail.com
The Storm UI shows this supervisor as down, but its workers keep running properly.
The error log of the failed supervisor is:

2016-09-18 09:47:13.129 o.a.s.d.supervisor [INFO] Shutting down e0c975d4-85c7-45fa-900d-bd3e7240e92c:
2016-09-18 09:47:13.129 o.a.s.config [INFO] GET worker-user
2016-09-18 09:47:13.129 o.a.s.config [WARN] Failed to get worker user for . 
#error {
:cause /home/storm/apache-storm-1.0.1/data/workers-users (Is a directory)
:via
[{:type java.io.FileNotFoundException
   :message /home/storm/apache-storm-1.0.1/data/workers-users (Is a directory)
   :at [java.io.FileInputStream open0 FileInputStream.java -2]}]
:trace
[[java.io.FileInputStream open0 FileInputStream.java -2]
  [java.io.FileInputStream open FileInputStream.java 195]
  [java.io.FileInputStream <init> FileInputStream.java 138]
  [clojure.java.io$fn__9189 invoke io.clj 229]
  [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69]
  [clojure.java.io$fn__9201 invoke io.clj 258]
  [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69]
  [clojure.java.io$fn__9163 invoke io.clj 165]
  [clojure.java.io$fn__9115$G__9091__9122 invoke io.clj 69]
  [clojure.java.io$reader doInvoke io.clj 102]
  [clojure.lang.RestFn invoke RestFn.java 410]
  [clojure.lang.AFn applyToHelper AFn.java 154]
  [clojure.lang.RestFn applyTo RestFn.java 132]
  [clojure.core$apply invoke core.clj 632]
  [clojure.core$slurp doInvoke core.clj 6653]
  [clojure.lang.RestFn invoke RestFn.java 410]
  [org.apache.storm.config$get_worker_user invoke config.clj 239]
  [org.apache.storm.daemon.supervisor$shutdown_worker invoke supervisor.clj 281]
  [org.apache.storm.daemon.supervisor$kill_existing_workers_with_change_in_components invoke supervisor.clj 536]
  [org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078 invoke supervisor.clj 595]
  [org.apache.storm.event$event_manager$fn__8630 invoke event.clj 40]
  [clojure.lang.AFn run AFn.java 22]
  [java.lang.Thread run Thread.java 745]]}
2016-09-18 09:47:13.129 o.a.s.d.supervisor [INFO] Shut down e0c975d4-85c7-45fa-900d-bd3e7240e92c:
2016-09-18 09:47:13.129 o.a.s.d.supervisor [INFO] Creating symlinks for worker-id: f8ff0276-ead3-4017-88f3-5d73b5e258e9 storm-id: DPI-17-1474162982 for files(1): ("resources")
2016-09-18 09:47:13.132 o.a.s.event [ERROR] Error when processing event
java.nio.file.NoSuchFileException: /home/storm/apache-storm-1.0.1/data/workers/f8ff0276-ead3-4017-88f3-5d73b5e258e9/resources
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createSymbolicLink(UnixFileSystemProvider.java:457)
at java.nio.file.Files.createSymbolicLink(Files.java:1043)
at org.apache.storm.util$create_symlink_BANG_.invoke(util.clj:606)
at org.apache.storm.util$create_symlink_BANG_.invoke(util.clj:596)
at org.apache.storm.daemon.supervisor$create_blobstore_links.invoke(supervisor.clj:1038)
at org.apache.storm.daemon.supervisor$fn__9341.invoke(supervisor.clj:1153)
at clojure.lang.MultiFn.invoke(MultiFn.java:251)
at org.apache.storm.daemon.supervisor$get_valid_new_worker_ids$iter__8926__8930$fn__8931.invoke(supervisor.clj:380)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:507)
at clojure.core$seq__4128.invoke(core.clj:137)
at clojure.core$dorun.invoke(core.clj:3009)
at clojure.core$doall.invoke(core.clj:3025)
at org.apache.storm.daemon.supervisor$get_valid_new_worker_ids.invoke(supervisor.clj:367)
at org.apache.storm.daemon.supervisor$sync_processes.invoke(supervisor.clj:428)
at clojure.core$partial$fn__4527.invoke(core.clj:2492)
at org.apache.storm.event$event_manager$fn__8630.invoke(event.clj:40)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
2016-09-18 09:47:13.135 o.a.s.util [ERROR] Halting process: ("Error when processing an event")
java.lang.RuntimeException: ("Error when processing an event")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at org.apache.storm.event$event_manager$fn__8630.invoke(event.clj:48)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)

Regards,
Junfeng
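The WARN near the top of the log hints at the mechanism: the supervisor asked for the worker user of an empty worker id ("Failed to get worker user for ."), so the per-worker file path it built resolved to the workers-users directory itself, and opening a directory as a file raises exactly the "(Is a directory)" FileNotFoundException in the trace. A minimal sketch of that path construction (the helper name is mine, not Storm's; paths are taken from the log above):

```java
import java.io.File;

public class WorkerUserPath {
    // Hypothetical mirror of how the supervisor derives the per-worker
    // user file: <storm-data>/workers-users/<worker-id>.
    static String workerUserFile(String dataDir, String workerId) {
        return dataDir + File.separator + "workers-users" + File.separator + workerId;
    }

    public static void main(String[] args) {
        String data = "/home/storm/apache-storm-1.0.1/data";
        // A normal worker id names a regular file under workers-users.
        System.out.println(workerUserFile(data, "f8ff0276-ead3-4017-88f3-5d73b5e258e9"));
        // An empty worker id yields the directory itself (trailing separator),
        // so a FileInputStream on it fails with "... (Is a directory)" on Linux.
        System.out.println(workerUserFile(data, ""));
    }
}
```

In other words, the symptom points at a worker entry with a missing/empty id in the supervisor's local state rather than at the running workers, which is consistent with the workers continuing unaffected.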


Re: Storm 1.0.2 - KafkaSpout cannot find partition information

2016-09-17 Thread Ambud Sharma
I believe you are overriding the value here:

spoutConfig.zkRoot = "/brokers";

On Sep 17, 2016 9:29 AM, "Dominik Safaric"  wrote:

> I've already configured it with an empty string, but unfortunately I keep
> getting the same message claiming no partitions can be found.
>
> Dominik Šafarić
>
> On 17 Sep 2016, at 18:11, Ambud Sharma  wrote:
>
> The zkRoot should be an empty string.
>
> On Sep 17, 2016 9:09 AM, "Dominik Safaric" 
> wrote:
>
>> Hi,
>>
>> I’ve deployed a topology consisting of a KafkaSpout using Kafka 0.10.0.1
>> and Zookeeper 3.4.6. All of the services, including the Nimbus and
>> Supervisor, run on the same instance.
>>
>> However, by examining the worker.log file, I’ve found that the KafkaSpout
>> is unable to find partition information for the given topic. The following
>> log messages appear for each of the 12 partitions:
>>
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] No partition
>> information found, using configuration to determine offset
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Last commit
>> offset from zookeeper: 0
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Commit offset 0
>> is more than 9223372036854775807 behind latest offset 0, resetting to
>> startOffsetTime=-2
>>
>> The SpoutConfig is instantiated as follows:
>>
>> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", new String(),
>> UUID.randomUUID().toString());
>> spoutConfig.scheme = new RawMultiScheme();
>> spoutConfig.zkServers = Arrays.asList("localhost");
>> spoutConfig.zkPort = 2181;
>> spoutConfig.zkRoot = "/brokers";
>>
>> By examining Zookeeper’s zNodes using the zkCli script, I’ve found that
>> the partition information is stored properly as prescribed by the default
>> configuration: /brokers/topics//partitions/…
>>
>> What exactly might be the reason behind KafkaSpout not being able to
>> consume messages from Kafka, i.e. find partition information? Is there a
>> configuration mistake I’ve made?
>>
>> Thanks in advance!
>>
>> Dominik
>>
>
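Putting the advice in this thread together, a corrected setup would look roughly like the sketch below (written against the storm-kafka 1.0.x API; the "localhost:2181" connection string is an assumption from the thread). The key point: broker/partition metadata under /brokers is discovered through ZkHosts, while the zkRoot constructor argument only controls where the spout stores its consumer offsets, so it stays empty and is not re-assigned afterwards.

```java
import java.util.Arrays;
import java.util.UUID;

import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.RawMultiScheme;

public class KafkaSpoutSetup {
    public static KafkaSpout build() {
        // ZkHosts reads broker/partition metadata from /brokers by default.
        BrokerHosts hosts = new ZkHosts("localhost:2181");

        // Third argument is zkRoot: leave it empty so committed offsets land
        // under /<consumer-id>/... instead of colliding with /brokers.
        SpoutConfig spoutConfig = new SpoutConfig(
                hosts, "bytes", "", UUID.randomUUID().toString());
        spoutConfig.scheme = new RawMultiScheme();
        spoutConfig.zkServers = Arrays.asList("localhost");
        spoutConfig.zkPort = 2181;
        // Do NOT set spoutConfig.zkRoot = "/brokers" afterwards; that
        // overrides the constructor value and breaks partition discovery.

        return new KafkaSpout(spoutConfig);
    }
}
```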


Re: Storm 1.0.2 - KafkaSpout cannot find partition information

2016-09-17 Thread Dominik Safaric
I've already configured it with an empty string, but unfortunately I keep getting the 
same message claiming no partitions can be found. 

Dominik Šafarić

> On 17 Sep 2016, at 18:11, Ambud Sharma  wrote:
> 
> The zkRoot should be an empty string.
> 
> 
>> On Sep 17, 2016 9:09 AM, "Dominik Safaric"  wrote:
>> Hi,
>> 
>> I’ve deployed a topology consisting of a KafkaSpout using Kafka 0.10.0.1 and 
>> Zookeeper 3.4.6. All of the services, including the Nimbus and Supervisor, 
>> run on the same instance. 
>> 
>> However, by examining the worker.log file, I’ve found that the KafkaSpout is 
>> unable to find partition information for the given topic. The following log 
>> messages appear for each of the 12 partitions:
>> 
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] No partition 
>> information found, using configuration to determine offset
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Last commit offset 
>> from zookeeper: 0
>> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Commit offset 0 is 
>> more than 9223372036854775807 behind latest offset 0, resetting to 
>> startOffsetTime=-2
>> 
>> The SpoutConfig is instantiated as follows:
>>  
>> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", new String(), 
>> UUID.randomUUID().toString());
>> spoutConfig.scheme = new RawMultiScheme();
>> spoutConfig.zkServers = Arrays.asList("localhost");
>> spoutConfig.zkPort = 2181;
>> spoutConfig.zkRoot = "/brokers";
>> 
>> By examining Zookeeper’s zNodes using the zkCli script, I’ve found that the 
>> partition information is stored properly as prescribed by the default 
>> configuration: /brokers/topics//partitions/…
>> 
>> What exactly might be the reason behind KafkaSpout not being able to consume 
>> messages from Kafka, i.e. find partition information? Is there a 
>> configuration mistake I’ve made? 
>> 
>> Thanks in advance!
>> 
>> Dominik


Re: Storm 1.0.2 - KafkaSpout cannot find partition information

2016-09-17 Thread Ambud Sharma
The zkRoot should be an empty string.

On Sep 17, 2016 9:09 AM, "Dominik Safaric"  wrote:

> Hi,
>
> I’ve deployed a topology consisting of a KafkaSpout using Kafka 0.10.0.1
> and Zookeeper 3.4.6. All of the services, including the Nimbus and
> Supervisor, run on the same instance.
>
> However, by examining the worker.log file, I’ve found that the KafkaSpout
> is unable to find partition information for the given topic. The following
> log messages appear for each of the 12 partitions:
>
> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] No partition
> information found, using configuration to determine offset
> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Last commit
> offset from zookeeper: 0
> 2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Commit offset 0
> is more than 9223372036854775807 behind latest offset 0, resetting to
> startOffsetTime=-2
>
> The SpoutConfig is instantiated as follows:
>
> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", new String(),
> UUID.randomUUID().toString());
> spoutConfig.scheme = new RawMultiScheme();
> spoutConfig.zkServers = Arrays.asList("localhost");
> spoutConfig.zkPort = 2181;
> spoutConfig.zkRoot = "/brokers";
>
> By examining Zookeeper’s zNodes using the zkCli script, I’ve found that
> the partition information is stored properly as prescribed by the default
> configuration: /brokers/topics//partitions/…
>
> What exactly might be the reason behind KafkaSpout not being able to
> consume messages from Kafka, i.e. find partition information? Is there a
> configuration mistake I’ve made?
>
> Thanks in advance!
>
> Dominik
>


Storm 1.0.2 - KafkaSpout cannot find partition information

2016-09-17 Thread Dominik Safaric
Hi,

I’ve deployed a topology consisting of a KafkaSpout using Kafka 0.10.0.1 and 
Zookeeper 3.4.6. All of the services, including the Nimbus and Supervisor, run 
on the same instance. 

However, by examining the worker.log file, I’ve found that the KafkaSpout is 
unable to find partition information for the given topic. The following log 
messages appear for each of the 12 partitions:

2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] No partition 
information found, using configuration to determine offset
2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Last commit offset from 
zookeeper: 0
2016-09-17 17:37:33.333 o.a.s.k.PartitionManager [INFO] Commit offset 0 is more 
than 9223372036854775807 behind latest offset 0, resetting to startOffsetTime=-2

The SpoutConfig is instantiated as follows:
 
SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", new String(), 
UUID.randomUUID().toString());
spoutConfig.scheme = new RawMultiScheme();
spoutConfig.zkServers = Arrays.asList("localhost");
spoutConfig.zkPort = 2181;
spoutConfig.zkRoot = "/brokers";

By examining Zookeeper’s zNodes using the zkCli script, I’ve found that the 
partition information is stored properly as prescribed by the default 
configuration: /brokers/topics//partitions/…

What exactly might be the reason behind KafkaSpout not being able to consume 
messages from Kafka, i.e. find partition information? Is there a configuration 
mistake I’ve made? 

Thanks in advance!

Dominik

Re: SpoutConfig zkRoot argument causing KafkaSpout exception

2016-09-17 Thread Ambud Sharma
It will refer to the root; a slash is used by default. As you can see, you
have two slashes in the path:

"//bytes3b68b144-e13c-4de3-beed-405e3ca5ae20/partition_1"



On Sep 17, 2016 12:49 AM, "Dominik Safaric" 
wrote:

> If the value is set to an empty string, to what path does it actually
> refer?
>
> Dominik
>
> On 17 Sep 2016, at 09:40, Ambud Sharma  wrote:
>
> zkRoot should be an empty string, not a /.
>
> Basically that config refers to the path where the consumer offsets will
> be stored.
>
> On Sep 17, 2016 12:20 AM, "Dominik Safaric" 
> wrote:
>
>> Hi,
>>
>> I’ve set up a topology consisting of a Kafka spout. But unfortunately, I
>> keep getting the exception Caused by:
>> java.lang.IllegalArgumentException: Invalid path string
>> "//bytes3b68b144-e13c-4de3-beed-405e3ca5ae20/partition_1" caused by empty
>> node name specified @1.
>>
>> Zookeeper has the default client port set (i.e. 2181), whereas the
>> brokers path is default as well.
>>
>> I supply SpoutConfig with the following arguments:
>>
>> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", "/", "bytes" +
>> UUID.randomUUID().toString());
>>
>> However, the problem obviously lies in the zkRoot argument I’ve
>> supplied to SpoutConfig.
>>
>> What value should it actually be? What does the zkRoot argument refer to?
>>
>> Thanks in advance!
>>
>


Re: SpoutConfig zkRoot argument causing KafkaSpout exception

2016-09-17 Thread Dominik Safaric
If the value is set to an empty string, to what path does it actually refer? 

Dominik

> On 17 Sep 2016, at 09:40, Ambud Sharma  wrote:
> 
> zkRoot should be an empty string, not a /.
> 
> Basically that config refers to the path where the consumer offsets will be 
> stored.
> 
> 
>> On Sep 17, 2016 12:20 AM, "Dominik Safaric"  wrote:
>> Hi,
>> 
>> I’ve set up a topology consisting of a Kafka spout. But unfortunately, I 
>> keep getting the exception Caused by: java.lang.IllegalArgumentException: 
>> Invalid path string 
>> "//bytes3b68b144-e13c-4de3-beed-405e3ca5ae20/partition_1" caused by empty 
>> node name specified @1.
>> 
>> Zookeeper has the default client port set (i.e. 2181), whereas the brokers 
>> path is default as well.
>> 
>> I supply SpoutConfig with the following arguments:
>> 
>> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", "/", "bytes" + 
>> UUID.randomUUID().toString());
>> 
>> However, the problem obviously lies in the zkRoot argument I’ve 
>> supplied to SpoutConfig. 
>> 
>> What value should it actually be? What does the zkRoot argument refer to? 
>> 
>> Thanks in advance!


Re: SpoutConfig zkRoot argument causing KafkaSpout exception

2016-09-17 Thread Ambud Sharma
zkRoot should be an empty string, not a /.

Basically that config refers to the path where the consumer offsets will be
stored.

On Sep 17, 2016 12:20 AM, "Dominik Safaric" 
wrote:

> Hi,
>
> I’ve set up a topology consisting of a Kafka spout. But unfortunately, I
> keep getting the exception Caused by:
> java.lang.IllegalArgumentException: Invalid path string
> "//bytes3b68b144-e13c-4de3-beed-405e3ca5ae20/partition_1" caused by empty
> node name specified @1.
>
> Zookeeper has the default client port set (i.e. 2181), whereas the brokers
> path is default as well.
>
> I supply SpoutConfig with the following arguments:
>
> SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", "/", "bytes" +
> UUID.randomUUID().toString());
>
> However, the problem obviously lies in the zkRoot argument I’ve
> supplied to SpoutConfig.
>
> What value should it actually be? What does the zkRoot argument refer to?
>
> Thanks in advance!
>


SpoutConfig zkRoot argument causing KafkaSpout exception

2016-09-17 Thread Dominik Safaric
Hi,

I’ve set up a topology consisting of a Kafka spout. But unfortunately, I keep 
getting the exception Caused by: java.lang.IllegalArgumentException: Invalid 
path string "//bytes3b68b144-e13c-4de3-beed-405e3ca5ae20/partition_1" caused by 
empty node name specified @1.

Zookeeper has the default client port set (i.e. 2181), whereas the brokers path 
is default as well.

I supply SpoutConfig with the following arguments:

SpoutConfig spoutConfig = new SpoutConfig(hosts, "bytes", "/", "bytes" + 
UUID.randomUUID().toString());

However, the problem obviously lies in the zkRoot argument I’ve supplied to 
SpoutConfig. 

What value should it actually be? What does the zkRoot argument refer to? 

Thanks in advance!
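The exception in this thread follows directly from how the committed-offset path is assembled: the spout joins zkRoot, the consumer id, and the partition node with slashes, so a zkRoot of "/" produces a leading empty node ("//bytes.../partition_1"), which Zookeeper rejects. A minimal sketch of that joining logic (the method name is mine, not storm-kafka's):

```java
public class ZkRootJoin {
    // Hypothetical mirror of how the offset path is assembled:
    // zkRoot + "/" + consumerId + "/" + partitionNode.
    static String offsetPath(String zkRoot, String consumerId, String partitionNode) {
        return zkRoot + "/" + consumerId + "/" + partitionNode;
    }

    public static void main(String[] args) {
        String id = "bytes3b68b144-e13c-4de3-beed-405e3ca5ae20";
        // zkRoot = "/" duplicates the separator: "//..." has an empty node name.
        System.out.println(offsetPath("/", id, "partition_1"));
        // zkRoot = "" yields a valid absolute path rooted at "/".
        System.out.println(offsetPath("", id, "partition_1"));
    }
}
```

This is why the advice later in the thread is to pass an empty string: the joining logic already supplies the leading slash.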