Hi Ahmed,
   It assigns and runs the topology if I assign each executor myself with
cluster.assign. But then what is the problem with EvenScheduler?
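The direct-assignment approach discussed below (pair every still-unscheduled executor with a free slot, then call cluster.assign) can be sketched with plain Java maps. Note the `SlotId`/`AssignRemaining` names and the round-robin policy are illustrative stand-ins for Storm's `Cluster`, `WorkerSlot`, and `ExecutorDetails` types, not Storm's actual API:

```java
import java.util.*;

public class AssignRemaining {
    // Spread the executors that still need scheduling round-robin over the
    // available slots, mimicking what a custom scheduler would do before
    // calling cluster.assign(slot, topologyId, executors) for each slot.
    static Map<String, List<int[]>> assign(List<int[]> needsScheduling,
                                           List<String> availableSlots) {
        Map<String, List<int[]>> assignment = new LinkedHashMap<>();
        for (String slot : availableSlots) {
            assignment.put(slot, new ArrayList<>());
        }
        int i = 0;
        for (int[] executor : needsScheduling) {
            // Round-robin: executor i goes to slot i mod slotCount.
            String slot = availableSlots.get(i % availableSlots.size());
            assignment.get(slot).add(executor);
            i++;
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Executors [1 1], [3 3], [4 4] are the ones left unscheduled
        // in the nimbus log below.
        List<int[]> executors = Arrays.asList(
            new int[]{1, 1}, new int[]{3, 3}, new int[]{4, 4});
        List<String> slots = Arrays.asList("node-a:6710", "node-a:6711");
        Map<String, List<int[]>> result = assign(executors, slots);
        System.out.println(result.get("node-a:6710").size()); // 2
        System.out.println(result.get("node-a:6711").size()); // 1
    }
}
```

In a real scheduler the executor list would come from cluster.getNeedsSchedulingExecutorToComponents(topology).keySet() and the slot list from cluster.getAvailableSlots(), as Ahmed describes below.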


On Thu, Nov 6, 2014 at 8:43 PM, Ahmed El Rheddane <[email protected]
> wrote:

>  As you can see in the following line:
> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Setting new assignment for
> topology id special-topology-1-1415269710:
> #backtype.storm.daemon.common.Assignment{:master-code-dir
> "/opt/storm-0.9.0.1/storm-data/nimbus/stormdist/special-topology-1-1415269710",
> :node->host {"e5ab32cb-6d83-4764-9074-a081a1f3e8d3" "flamingo-server"},
> :executor->node+port {[2 2] ["e5ab32cb-6d83-4764-9074-a081a1f3e8d3" 6708]},
> :executor->start-time-secs {[2 2] 1415269710}}
>
> Only the special executor is assigned. Also, there is no trace of
> EvenScheduler being called, which is rather odd, since your DemoScheduler
> explicitly calls it.
>
> Instead of calling EvenScheduler, you can directly assign the rest of the
> executors:
> cluster.getNeedsSchedulingExecutorToComponents(topology).keySet()
> on the available slots:
> cluster.getAvailableSlots()
> using:
> cluster.assign(slot, topologyId, executors)
>
> I hope this helps.
>
> Ahmed
>
>
> On 11/06/2014 02:00 PM, swapnil joshi wrote:
>
> Still same error :(
>
> On Thu, Nov 6, 2014 at 4:25 PM, swapnil joshi <[email protected]
> > wrote:
>
>> Nimbus.log
>>
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client environment:host.name
>> =swapnil-lp
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.version=1.7.0_60
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.vendor=Oracle Corporation
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.home=/opt/jdk7/jre
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.class.path=/opt/storm-0.9.0.1/storm-core-0.9.0.1.jar:/opt/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/opt/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/opt/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/opt/storm-0.9.0.1/lib/commons-codec-1.4.jar:/opt/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/opt/storm-0.9.0.1/lib/minlog-1.2.jar:/opt/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/opt/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/opt/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/opt/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/opt/storm-0.9.0.1/lib/clout-1.0.1.jar:/opt/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/opt/storm-0.9.0.1/lib/kryo-2.17.jar:/opt/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/opt/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/opt/storm-0.9.0.1/lib/commons-exec-1.1.jar:/opt/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/opt/storm-0.9.0.1/lib/compojure-1.1.3.jar:/opt/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm-0.9.0.1/lib/clojure-1.4.0.jar:/opt/storm-0.9.0.1/lib/storm-lib!
>> -1.0.0-SNA
>> PSHOT.jar:/opt/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/opt/storm-0.9.0.1/lib/guava-13.0.jar:/opt/storm-0.9.0.1/lib/commons-lang-2.5.jar:/opt/storm-0.9.0.1/lib/jetty-6.1.26.jar:/opt/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/opt/storm-0.9.0.1/lib/jline-0.9.94.jar:/opt/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/opt/storm-0.9.0.1/lib/joda-time-2.0.jar:/opt/storm-0.9.0.1/lib/commons-io-1.4.jar:/opt/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/opt/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/opt/storm-0.9.0.1/lib/clj-time-0.4.1.jar:/opt/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/opt/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/opt/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/opt/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/opt/storm-0.9.0.1/lib/servlet-api-2.5.jar:/opt/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/opt/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/opt/storm-0.9.0.1/lib/json-simple-1.1.jar:/opt/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/opt/storm-0.9.0.1/lib/junit-3.8.1.jar:/opt/storm-0!
>> .9.0.1/lib
>> /commons-logging-1.1.1.jar:/opt/storm-0.9.0.1/lib/asm-4.0.jar:/opt/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/opt/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/opt/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/opt/storm-0.9.0.1/lib/httpcore-4.1.jar:/opt/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/opt/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/opt/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/opt/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/opt/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/opt/storm-0.9.0.1/lib/objenesis-1.2.jar:/opt/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/opt/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/opt/storm-0.9.0.1/conf
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.io.tmpdir=/tmp
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:java.compiler=<NA>
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client environment:os.name
>> =Linux
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:os.arch=amd64
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:os.version=3.2.0-30-generic
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client environment:user.name
>> =swapnil
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:user.home=/home/swapnil
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Client
>> environment:user.dir=/opt/storm-0.9.0.1
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server environment:
>> host.name=swapnil-lp
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.version=1.7.0_60
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.vendor=Oracle Corporation
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.home=/opt/jdk7/jre
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.class.path=/opt/storm-0.9.0.1/storm-core-0.9.0.1.jar:/opt/storm-0.9.0.1/storm-console-logging-0.9.0.1.jar:/opt/storm-0.9.0.1/storm-netty-0.9.0.1.jar:/opt/storm-0.9.0.1/lib/carbonite-1.5.0.jar:/opt/storm-0.9.0.1/lib/commons-codec-1.4.jar:/opt/storm-0.9.0.1/lib/reflectasm-1.07-shaded.jar:/opt/storm-0.9.0.1/lib/minlog-1.2.jar:/opt/storm-0.9.0.1/lib/commons-fileupload-1.2.1.jar:/opt/storm-0.9.0.1/lib/snakeyaml-1.11.jar:/opt/storm-0.9.0.1/lib/jzmq-2.1.0.jar:/opt/storm-0.9.0.1/lib/mockito-all-1.9.5.jar:/opt/storm-0.9.0.1/lib/clout-1.0.1.jar:/opt/storm-0.9.0.1/lib/hiccup-0.3.6.jar:/opt/storm-0.9.0.1/lib/kryo-2.17.jar:/opt/storm-0.9.0.1/lib/ring-core-1.1.5.jar:/opt/storm-0.9.0.1/lib/clojure-complete-0.2.3.jar:/opt/storm-0.9.0.1/lib/commons-exec-1.1.jar:/opt/storm-0.9.0.1/lib/core.incubator-0.1.0.jar:/opt/storm-0.9.0.1/lib/compojure-1.1.3.jar:/opt/storm-0.9.0.1/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm-0.9.0.1/lib/clojure-1.4.0.jar:/opt/storm-0.9.0.1/lib/storm-lib!
>> -1.0.0-SNA
>> PSHOT.jar:/opt/storm-0.9.0.1/lib/math.numeric-tower-0.0.1.jar:/opt/storm-0.9.0.1/lib/guava-13.0.jar:/opt/storm-0.9.0.1/lib/commons-lang-2.5.jar:/opt/storm-0.9.0.1/lib/jetty-6.1.26.jar:/opt/storm-0.9.0.1/lib/slf4j-api-1.6.5.jar:/opt/storm-0.9.0.1/lib/jline-0.9.94.jar:/opt/storm-0.9.0.1/lib/zookeeper-3.3.3.jar:/opt/storm-0.9.0.1/lib/joda-time-2.0.jar:/opt/storm-0.9.0.1/lib/commons-io-1.4.jar:/opt/storm-0.9.0.1/lib/netty-3.6.3.Final.jar:/opt/storm-0.9.0.1/lib/disruptor-2.10.1.jar:/opt/storm-0.9.0.1/lib/clj-time-0.4.1.jar:/opt/storm-0.9.0.1/lib/jetty-util-6.1.26.jar:/opt/storm-0.9.0.1/lib/logback-core-1.0.6.jar:/opt/storm-0.9.0.1/lib/ring-devel-0.3.11.jar:/opt/storm-0.9.0.1/lib/tools.macro-0.1.0.jar:/opt/storm-0.9.0.1/lib/servlet-api-2.5.jar:/opt/storm-0.9.0.1/lib/log4j-over-slf4j-1.6.6.jar:/opt/storm-0.9.0.1/lib/curator-client-1.0.1.jar:/opt/storm-0.9.0.1/lib/json-simple-1.1.jar:/opt/storm-0.9.0.1/lib/logback-classic-1.0.6.jar:/opt/storm-0.9.0.1/lib/junit-3.8.1.jar:/opt/storm-0!
>> .9.0.1/lib
>> /commons-logging-1.1.1.jar:/opt/storm-0.9.0.1/lib/asm-4.0.jar:/opt/storm-0.9.0.1/lib/servlet-api-2.5-20081211.jar:/opt/storm-0.9.0.1/lib/jgrapht-0.8.3.jar:/opt/storm-0.9.0.1/lib/curator-framework-1.0.1.jar:/opt/storm-0.9.0.1/lib/httpcore-4.1.jar:/opt/storm-0.9.0.1/lib/libthrift7-0.7.0-2.jar:/opt/storm-0.9.0.1/lib/httpclient-4.1.1.jar:/opt/storm-0.9.0.1/lib/clj-stacktrace-0.2.2.jar:/opt/storm-0.9.0.1/lib/tools.cli-0.2.2.jar:/opt/storm-0.9.0.1/lib/ring-servlet-0.3.11.jar:/opt/storm-0.9.0.1/lib/objenesis-1.2.jar:/opt/storm-0.9.0.1/lib/tools.nrepl-0.2.3.jar:/opt/storm-0.9.0.1/lib/tools.logging-0.2.3.jar:/opt/storm-0.9.0.1/conf
>>
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.io.tmpdir=/tmp
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:java.compiler=<NA>
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server environment:
>> os.name=Linux
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:os.arch=amd64
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:os.version=3.2.0-30-generic
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server environment:
>> user.name=swapnil
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:user.home=/home/swapnil
>> 2014-11-06 15:58:17 o.a.z.s.ZooKeeperServer [INFO] Server
>> environment:user.dir=/opt/storm-0.9.0.1
>> 2014-11-06 15:58:17 b.s.d.nimbus [INFO] Starting Nimbus with conf
>> {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper",
>> "topology.tick.tuple.freq.secs" nil,
>> "topology.builtin.metrics.bucket.size.secs" 60,
>> "topology.fall.back.on.java.serialization" true,
>> "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 5000,
>> "topology.skip.missing.kryo.registrations" false,
>> "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m",
>> "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true,
>> "topology.trident.batch.emit.interval.millis" 500,
>> "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m",
>> "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib",
>> "topology.executor.send.buffer.size" 1024, "storm.local.dir"
>> "/opt/storm-0.9.0.1/storm-data", "storm.messaging.netty.buffer_size"
>> 5242880, "supervisor.worker.start.timeout.secs" 120,
>> "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs"
>> 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64,
>> "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "192.168.1.13",
>> "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2181,
>> "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size"
>> 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root"
>> "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000,
>> "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1,
>> "storm.zookeeper.servers" ["192.168.1.13"], "transactional.zookeeper.root"
>> "/transactional", "topology.acker.executors" nil,
>> "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil,
>> "drpc.queue.size" 128, "worker.childopts" "-Xmx768m",
>> "supervisor.heartbeat.frequency.secs" 5,
>> "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772,
>> "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m",
>> "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3,
>> "topology.tasks" nil, "storm.messaging.netty.max_retries" 30,
>> "topology.spout.wait.strategy"
>> "backtype.storm.spout.SleepSpoutWaitStrategy", "topology.max.spout.pending"
>> nil, "storm.zookeeper.retry.interval" 1000, "
>> topology.sleep.spout.wait.strategy.time.ms" 1,
>> "nimbus.topology.validator"
>> "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports"
>> [6710 6711], "topology.debug" false, "nimbus.task.launch.secs" 120,
>> "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30,
>> "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts"
>> "-Xmx256m", "nimbus.thrift.port" 6627, "storm.scheduler"
>> "storm.DemoScheduler", "topology.stats.sample.rate" 0.05,
>> "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer"
>> "backtype.storm.serialization.types.ListDelegateSerializer",
>> "topology.disruptor.wait.strategy"
>> "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30,
>> "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory"
>> "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port"
>> 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times"
>> 5, "storm.thrift.transport"
>> "backtype.storm.security.auth.SimpleTransportPlugin",
>> "topology.state.synchronization.timeout.secs" 60,
>> "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs"
>> 600, "storm.messaging.transport" "backtype.storm.messaging.zmq", "
>> logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000,
>> "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port"
>> 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "distributed",
>> "topology.optimize" true, "topology.max.task.parallelism" nil,
>> "supervisor.scheduler.meta" {"name" "normal-supervisor"}}
>> 2014-11-06 15:58:17 b.s.d.nimbus [INFO] Using custom scheduler:
>> storm.DemoScheduler
>> 2014-11-06 15:58:17 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Initiating client connection,
>> connectString=192.168.1.13:2181 sessionTimeout=20000
>> watcher=com.netflix.curator.ConnectionState@4ce2c6cd
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Opening socket connection to
>> server /192.168.1.13:2181
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Socket connection established
>> to swapnil-lp/192.168.1.13:2181, initiating session
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Session establishment
>> complete on server swapnil-lp/192.168.1.13:2181, sessionid =
>> 0x14984a2f6e50000, negotiated timeout = 40000
>> 2014-11-06 15:58:17 b.s.zookeeper [INFO] Zookeeper state update:
>> :connected:none
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Session: 0x14984a2f6e50000
>> closed
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] EventThread shut down
>> 2014-11-06 15:58:17 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
>> 2014-11-06 15:58:17 o.a.z.ZooKeeper [INFO] Initiating client connection,
>> connectString=192.168.1.13:2181/storm sessionTimeout=20000
>> watcher=com.netflix.curator.ConnectionState@65e3bf5f
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Opening socket connection to
>> server /192.168.1.13:2181
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Socket connection established
>> to swapnil-lp/192.168.1.13:2181, initiating session
>> 2014-11-06 15:58:17 o.a.z.ClientCnxn [INFO] Session establishment
>> complete on server swapnil-lp/192.168.1.13:2181, sessionid =
>> 0x14984a2f6e50001, negotiated timeout = 40000
>> 2014-11-06 15:58:18 b.s.d.nimbus [INFO] Starting Nimbus server...
>> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Uploading file from client to
>> /opt/storm-0.9.0.1/storm-data/nimbus/inbox/stormjar-77307906-2a53-42be-8bc4-be53db02c2f0.jar
>> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Finished uploading file from
>> client:
>> /opt/storm-0.9.0.1/storm-data/nimbus/inbox/stormjar-77307906-2a53-42be-8bc4-be53db02c2f0.jar
>> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Received topology submission for
>> special-topology with conf {"topology.max.task.parallelism" nil,
>> "topology.acker.executors" nil, "topology.kryo.register" nil,
>> "topology.kryo.decorators" (), "topology.name" "special-topology", "
>> storm.id" "special-topology-1-1415269710", "wordsFile" "/opt/words.txt",
>> "topology.debug" false, "topology.max.spout.pending" 1}
>> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Activating special-topology:
>> special-topology-1-1415269710
>> 2014-11-06 15:58:30 b.s.d.nimbus [INFO] Setting new assignment for
>> topology id special-topology-1-1415269710:
>> #backtype.storm.daemon.common.Assignment{:master-code-dir
>> "/opt/storm-0.9.0.1/storm-data/nimbus/stormdist/special-topology-1-1415269710",
>> :node->host {"e5ab32cb-6d83-4764-9074-a081a1f3e8d3" "flamingo-server"},
>> :executor->node+port {[2 2] ["e5ab32cb-6d83-4764-9074-a081a1f3e8d3" 6708]},
>> :executor->start-time-secs {[2 2] 1415269710}}
>> 2014-11-06 15:58:38 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[3 3] not alive
>> 2014-11-06 15:58:38 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[4 4] not alive
>> 2014-11-06 15:58:38 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[1 1] not alive
>> 2014-11-06 15:58:49 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[3 3] not alive
>> 2014-11-06 15:58:49 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[4 4] not alive
>> 2014-11-06 15:58:49 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[1 1] not alive
>> 2014-11-06 15:58:59 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[3 3] not alive
>> 2014-11-06 15:58:59 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[4 4] not alive
>> 2014-11-06 15:58:59 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[1 1] not alive
>> 2014-11-06 15:59:09 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[3 3] not alive
>> 2014-11-06 15:59:09 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[4 4] not alive
>> 2014-11-06 15:59:09 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[1 1] not alive
>> 2014-11-06 15:59:19 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[3 3] not alive
>> 2014-11-06 15:59:19 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[4 4] not alive
>> 2014-11-06 15:59:19 b.s.d.nimbus [INFO] Executor
>> special-topology-1-1415269710:[1 1] not alive
>> 2014-11-06 15:59:22 b.s.d.nimbus [INFO] Delaying event :remove for 1 secs
>> for special-topology-1-1415269710
>> 2014-11-06 15:59:22 b.s.d.nimbus [INFO] Updated
>> special-topology-1-1415269710 with status {:type :killed, :kill-time-secs 1}
>> 2014-11-06 15:59:29 b.s.d.nimbus [INFO] Killing topology:
>> special-topology-1-1415269710
>> 2014-11-06 15:59:29 b.s.d.nimbus [INFO] Cleaning up
>> special-topology-1-1415269710
>> 2014-11-06 15:59:35 b.s.d.nimbus [INFO] Shutting down master
>> 2014-11-06 15:59:35 o.a.z.ClientCnxn [INFO] EventThread shut down
>> 2014-11-06 15:59:35 o.a.z.ZooKeeper [INFO] Session: 0x14984a2f6e50001
>> closed
>> 2014-11-06 15:59:35 b.s.d.nimbus [INFO] Shut down master
>>
>>
>> On Thu, Nov 6, 2014 at 3:33 PM, Ahmed El Rheddane <
>> [email protected]> wrote:
>>
>>>  Don't mention it :)
>>>
>>> Can't you see any traces from Storm's EvenScheduler in nimbus.log?
>>>
>>> Ahmed
>>>
>>>
>>> On 11/06/2014 10:47 AM, swapnil joshi wrote:
>>>
>>> Hi Ahmed,
>>> First, thank you for your immediate response.
>>> The log information is as follows:
>>> Scheduler log:
>>> DemoScheduler: begin scheduling
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> Our special topology needs scheduling.
>>> needs scheduling(component->executor): {__acker=[[1, 1]],
>>> word-counter=[[3, 3]], word-normalizer=[[4, 4]], special-spout=[[2, 2]]}
>>> needs scheduling(executor->compoenents): {[3, 3]=word-counter, [2,
>>> 2]=special-spout, [1, 1]=__acker, [4, 4]=word-normalizer}
>>> current assignments: {}
>>> Our special-spout needs scheduling.
>>>  SuperVisorrrrrrrrrrrrrrrrrrrrr
>>> Meta Name ======= normal-supervisor
>>>  SuperVisorrrrrrrrrrrrrrrrrrrrr
>>> Meta Name ======= special-supervisor
>>> Found the special-supervisor
>>> WWWWWWWWWWWWe assigned executors:[[2, 2]] to slot:
>>> [6378cebb-463e-407d-b4fe-45a8a18bbf38, 6710]
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> Our special topology needs scheduling.
>>> needs scheduling(component->executor): {__acker=[[1, 1]],
>>> word-counter=[[3, 3]], word-normalizer=[[4, 4]]}
>>> needs scheduling(executor->compoenents): {[3, 3]=word-counter, [1,
>>> 1]=__acker, [4, 4]=word-normalizer}
>>> current assignments: {[2, 2]=6378cebb-463e-407d-b4fe-45a8a18bbf38:6710}
>>> Our special-spout DOES NOT NEED scheduling.
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> Our special topology needs scheduling.
>>> needs scheduling(component->executor): {__acker=[[1, 1]],
>>> word-counter=[[3, 3]], word-normalizer=[[4, 4]]}
>>> needs scheduling(executor->compoenents): {[3, 3]=word-counter, [1,
>>> 1]=__acker, [4, 4]=word-normalizer}
>>> current assignments: {[2, 2]=6378cebb-463e-407d-b4fe-45a8a18bbf38:6710}
>>> Our special-spout DOES NOT NEED scheduling.
>>> FInalise scheduling
>>> DemoScheduler: begin scheduling
>>> FInalise scheduling
>>>
>>>  Supervisor Log:
>>> 2014-11-06 14:25:10 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:11 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:11 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:12 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:12 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:13 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>> 2014-11-06 14:25:13 b.s.d.supervisor [INFO]
>>> d0f56b71-5229-4974-8120-41f368f7f29a still hasn't started
>>>
>>> Nimbus log: as provided in the mail above.
>>>
>>> Zookeeper Log
>>> 2014-11-06 14:25:13,720 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617]
>>> - Established session 0x149844d9f4e0007 with negotiated timeout 20000 for
>>> client /192.168.1.13:49582
>>> 2014-11-06 14:28:46,614 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
>>> EndOfStreamException: Unable to read additional data from client
>>> sessionid 0x149844d9f4e0007, likely client has closed socket
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> 2014-11-06 14:28:46,618 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:49582 which had sessionid 0x149844d9f4e0007
>>> 2014-11-06 14:29:06,000 [myid:] - INFO
>>> [SessionTracker:ZooKeeperServer@347] - Expiring session
>>> 0x149844d9f4e0007, timeout of 20000ms exceeded
>>> 2014-11-06 14:29:06,001 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0007
>>> 2014-11-06 14:47:13,300 [myid:] - WARN  [SyncThread:0:FileTxnLog@334] -
>>> fsync-ing the write ahead log in SyncThread:0 took 1088ms which will
>>> adversely effect operation latency. See the ZooKeeper troubleshooting guide
>>> 2014-11-06 14:53:45,547 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
>>> EndOfStreamException: Unable to read additional data from client
>>> sessionid 0x149844d9f4e0005, likely client has closed socket
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> 2014-11-06 14:53:45,548 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.2:41944 which had sessionid 0x149844d9f4e0005
>>> 2014-11-06 14:53:52,394 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
>>> EndOfStreamException: Unable to read additional data from client
>>> sessionid 0x149844d9f4e0003, likely client has closed socket
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
>>>         at
>>> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
>>>         at java.lang.Thread.run(Thread.java:745)
>>> 2014-11-06 14:53:52,395 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:49571 which had sessionid 0x149844d9f4e0003
>>> 2014-11-06 14:54:04,000 [myid:] - INFO
>>> [SessionTracker:ZooKeeperServer@347] - Expiring session
>>> 0x149844d9f4e0005, timeout of 20000ms exceeded
>>> 2014-11-06 14:54:04,001 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0005
>>> 2014-11-06 14:54:05,839 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0001
>>> 2014-11-06 14:54:05,856 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:49569 which had sessionid 0x149844d9f4e0001
>>> 2014-11-06 14:54:12,000 [myid:] - INFO
>>> [SessionTracker:ZooKeeperServer@347] - Expiring session
>>> 0x149844d9f4e0003, timeout of 20000ms exceeded
>>> 2014-11-06 14:54:12,000 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0003
>>> 2014-11-06 14:59:48,769 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket
>>> connection from /192.168.1.13:50491
>>> 2014-11-06 14:59:48,771 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] - Connection request from old
>>> client /192.168.1.13:50491; will be dropped if server is in r-o mode
>>> 2014-11-06 14:59:48,771 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to
>>> establish new session at /192.168.1.13:50491
>>> 2014-11-06 14:59:48,789 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617]
>>> - Established session 0x149844d9f4e0008 with negotiated timeout 20000 for
>>> client /192.168.1.13:50491
>>> 2014-11-06 14:59:48,796 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0008
>>> 2014-11-06 14:59:48,800 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:50491 which had sessionid 0x149844d9f4e0008
>>> 2014-11-06 14:59:48,802 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket
>>> connection from /192.168.1.13:50492
>>> 2014-11-06 14:59:48,802 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] - Connection request from old
>>> client /192.168.1.13:50492; will be dropped if server is in r-o mode
>>> 2014-11-06 14:59:48,802 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to
>>> establish new session at /192.168.1.13:50492
>>> 2014-11-06 14:59:48,811 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617]
>>> - Established session 0x149844d9f4e0009 with negotiated timeout 20000 for
>>> client /192.168.1.13:50492
>>> 2014-11-06 15:00:01,131 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket
>>> connection from /192.168.1.13:50493
>>> 2014-11-06 15:00:01,135 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] - Connection request from old
>>> client /192.168.1.13:50493; will be dropped if server is in r-o mode
>>> 2014-11-06 15:00:01,135 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to
>>> establish new session at /192.168.1.13:50493
>>> 2014-11-06 15:00:01,158 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617]
>>> - Established session 0x149844d9f4e000a with negotiated timeout 20000 for
>>> client /192.168.1.13:50493
>>> 2014-11-06 15:00:01,172 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e000a
>>> 2014-11-06 15:00:01,180 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:50493 which had sessionid 0x149844d9f4e000a
>>> 2014-11-06 15:00:01,195 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket
>>> connection from /192.168.1.13:50494
>>> 2014-11-06 15:00:01,196 [myid:] - WARN  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] - Connection request from old
>>> client /192.168.1.13:50494; will be dropped if server is in r-o mode
>>> 2014-11-06 15:00:01,196 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to
>>> establish new session at /192.168.1.13:50494
>>> 2014-11-06 15:00:01,202 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@617]
>>> - Established session 0x149844d9f4e000b with negotiated timeout 20000 for
>>> client /192.168.1.13:50494
>>> 2014-11-06 15:11:36,380 [myid:] - INFO  [ProcessThread(sid:0
>>> cport:-1)::PrepRequestProcessor@494] - Processed session termination
>>> for sessionid: 0x149844d9f4e0009
>>> 2014-11-06 15:11:36,403 [myid:] - INFO  [NIOServerCxn.Factory:
>>> 0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
>>> client /192.168.1.13:50492 which had sessionid 0x149844d9f4e0009
>>>
>>>
>>>
>>>
>>> On Thu, Nov 6, 2014 at 2:58 PM, Ahmed El Rheddane <
>>> [email protected]> wrote:
>>>
>>>>  Can you share the Scheduler's logs as well?
>>>>
>>>> Ahmed
>>>>
>>>>
>>>> On 11/06/2014 09:56 AM, swapnil joshi wrote:
>>>>
>>>> Yes, I had checked. "new EvenScheduler().schedule(topologies,
>>>> cluster);" is called at the end.
>>>>
>>>> On Thu, Nov 6, 2014 at 2:12 PM, Ahmed El Rheddane <
>>>> [email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Are you sure you called "new EvenScheduler().schedule(topologies,
>>>>> cluster);" at the end of the schedule method?
>>>>>
>>>>> Ahmed
>>>>>
>>>>>
>>>>> On 11/06/2014 08:01 AM, swapnil joshi wrote:
>>>>>
>>>>>> Hi Storm,
>>>>>>
>>>>>> I am new to Storm. I want to schedule my topology. I found some
>>>>>> useful information at the following address:
>>>>>>
>>>>>> http://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
>>>>>>
>>>>>> But when I submit my topology to my Storm cluster, the special
>>>>>> spout runs on the special supervisor machine, but my other bolts
>>>>>> do not start.
>>>>>>
>>>>>> In the nimbus log, I get the following error:
>>>>>> 2014-11-05 19:24:55 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[3 3] not alive
>>>>>> 2014-11-05 19:24:55 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[4 4] not alive
>>>>>> 2014-11-05 19:24:55 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[1 1] not alive
>>>>>> 2014-11-05 19:25:05 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[3 3] not alive
>>>>>> 2014-11-05 19:25:05 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[4 4] not alive
>>>>>> 2014-11-05 19:25:05 b.s.d.nimbus [INFO] Executor
>>>>>> special-topology-1-1415195688:[1 1] not alive
>>>>>>
>>>>>> Why can't it start the other workers on the other machine?
>>>>>>
>>>>>> Technical specification:
>>>>>> I am using Storm version 0.9.0.1.
>>>>>>
>>>>>> I have two machines:
>>>>>> nimbus : 192.168.1.13
>>>>>> normal-supervisor : 192.168.1.13
>>>>>> special-supervisor : 192.168.1.2
>>>>>>
>>>>>> I am waiting for your valuable guidance.
>>>>>> Thank you in advance :)
>>>>>> --
>>>>>> Regards,
>>>>>> Swapnil K. Joshi
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Swapnil K. Joshi
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Swapnil K. Joshi
>>>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Swapnil K. Joshi
>>
>
>
>
> --
> Regards,
> Swapnil K. Joshi
>
>
>
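
For reference, the manual assignment discussed above (spreading the
remaining executors over the free slots with cluster.assign instead of
delegating to EvenScheduler) could be sketched roughly like this. This is
a minimal stand-in using plain collections, not Storm's actual Cluster or
WorkerSlot API; the class and helper names are illustrative only:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch: round-robin the executors that still need
 * scheduling over the available slots. In a real custom scheduler, the
 * executors would come from
 * cluster.getNeedsSchedulingExecutorToComponents(topology).keySet(),
 * the slots from cluster.getAvailableSlots(), and each map entry below
 * would become one cluster.assign(slot, topologyId, executors) call.
 */
public class ManualAssignSketch {

    // Executor i goes to slot (i mod number-of-slots).
    static Map<String, List<String>> assignRoundRobin(
            List<String> executors, List<String> slots) {
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (String slot : slots) {
            assignment.put(slot, new ArrayList<>());
        }
        for (int i = 0; i < executors.size(); i++) {
            assignment.get(slots.get(i % slots.size()))
                      .add(executors.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Executors [1 1], [3 3], [4 4] are the ones reported "not alive"
        // in the nimbus log above; slot names here are made up.
        List<String> executors = Arrays.asList("[1 1]", "[3 3]", "[4 4]");
        List<String> slots = Arrays.asList("node-a:6700", "node-b:6700");
        System.out.println(assignRoundRobin(executors, slots));
    }
}
```

The point of the sketch is only the shape of the loop: every executor
that still needs scheduling ends up in exactly one slot, so nothing is
left unassigned after the special spout has been placed.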


-- 
Regards,
Swapnil K. Joshi
