This is what I have in storm.yaml. Is something wrong there?

########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "localhost"
storm.zookeeper.port: 2181
storm.zookeeper.root: "/storm"
storm.zookeeper.session.timeout: 20000
storm.zookeeper.connection.timeout: 15000
storm.zookeeper.retry.times: 5
storm.zookeeper.retry.interval: 1000
storm.zookeeper.retry.intervalceiling.millis: 30000
storm.zookeeper.auth.user: null
storm.zookeeper.auth.password: null
storm.cluster.mode: "distributed" # can be distributed or local
storm.local.mode.zmq: false
storm.thrift.transport: "backtype.storm.security.auth.SimpleTransportPlugin"
storm.principal.tolocal: "backtype.storm.security.auth.DefaultPrincipalToLocal"
storm.group.mapping.service: "backtype.storm.security.auth.ShellBasedGroupsMapping"
storm.group.mapping.service.params: null
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.nimbus.retry.times: 5
storm.nimbus.retry.interval.millis: 2000
storm.nimbus.retry.intervalceiling.millis: 60000
storm.auth.simple-white-list.users: []
storm.auth.simple-acl.users: []
storm.auth.simple-acl.users.commands: []
storm.auth.simple-acl.admins: []
storm.meta.serialization.delegate: "backtype.storm.serialization.GzipThriftSerializationDelegate"
storm.codedistributor.class: "backtype.storm.codedistributor.LocalFileSystemCodeDistributor"
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
drpc.servers:
    - "server1"
    - "server2"
drpc.port: 3772
drpc.worker.threads: 64
drpc.max_buffer_size: 1048576
drpc.queue.size: 128
drpc.invocations.port: 3773
drpc.invocations.threads: 64
drpc.request.timeout.secs: 600
drpc.childopts: "-Xmx768m"
drpc.http.port: 3774
drpc.https.port: -1
drpc.https.keystore.password: ""
drpc.https.keystore.type: "JKS"
drpc.http.creds.plugin: backtype.storm.security.auth.DefaultHttpCredentialsPlugin
drpc.authorizer.acl.filename: "drpc-auth-acl.yaml"
drpc.authorizer.acl.strict: false

transactional.zookeeper.root: "/transactional"
transactional.zookeeper.servers: null
transactional.zookeeper.port: null

supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
supervisor.childopts: "-Xmx256m"
supervisor.run.worker.as.user: false
# how long supervisor will wait to ensure that a worker process is started
supervisor.worker.start.timeout.secs: 120
# how long between heartbeats until supervisor considers that worker dead and tries to restart it
supervisor.worker.timeout.secs: 30
# how many seconds to sleep for before shutting down threads on worker
supervisor.worker.shutdown.sleep.secs: 1
# how frequently the supervisor checks on the status of the processes it's monitoring and restarts if necessary
supervisor.monitor.frequency.secs: 3
# how frequently the supervisor heartbeats to the cluster state (for nimbus)
supervisor.heartbeat.frequency.secs: 5
supervisor.enable: true
supervisor.supervisors: []
supervisor.supervisors.commands: []

### worker.* configs are for task workers
worker.childopts: "-Xmx768m"
worker.gc.childopts: ""
worker.heartbeat.frequency.secs: 1
# control how many worker receiver threads we need per worker
topology.worker.receiver.thread.count: 1
task.heartbeat.frequency.secs: 3
task.refresh.poll.secs: 10
task.credentials.poll.secs: 30
# now should be null by default
topology.backpressure.enable: true
backpressure.disruptor.high.watermark: 0.9
backpressure.disruptor.low.watermark: 0.4

zmq.threads: 1
zmq.linger.millis: 5000
zmq.hwm: 0

storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880 # 5MB buffer
# Since nimbus.task.launch.secs and supervisor.worker.start.timeout.secs are 120, other workers
# should also wait at least that long before giving up on connecting to the other worker
# (with the values below, 300 retries at up to 1000 ms each gives roughly 300 s, well above 120 s).
# The reconnection period also needs to be bigger than storm.zookeeper.session.timeout (default
# is 20 s), so that we can abort the reconnection when the target worker is dead.
storm.messaging.netty.max_retries: 300
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
# If the Netty messaging layer is busy (the internal Netty buffer is not writable), the Netty
# client will batch messages as much as possible, up to storm.messaging.netty.transfer.batch.size
# bytes; otherwise it will flush messages as soon as possible to reduce latency.
storm.messaging.netty.transfer.batch.size: 262144
# Sets the backlog value to specify when the channel binds to a local address
storm.messaging.netty.socket.backlog: 500
# By default, Netty SASL authentication is set to false. Users can override this and set it to
# true for a specific topology.
storm.messaging.netty.authentication: false

# default number of seconds group mapping service will cache user group
storm.group.mapping.service.cache.duration.secs: 120

nimbus.thrift.max_buffer_size: 80000000
nimbus.seeds: ["localhost"]
nimbus.thrift.port: 6627
nimbus.thrift.threads: 64
nimbus.childopts: "-Xmx1024m"
nimbus.task.timeout.secs: 30
nimbus.supervisor.timeout.secs: 60
nimbus.monitor.freq.secs: 10
nimbus.cleanup.inbox.freq.secs: 600
nimbus.inbox.jar.expiration.secs: 3600
nimbus.code.sync.freq.secs: 300
nimbus.task.launch.secs: 120
nimbus.reassign: true
nimbus.file.copy.expiration.secs: 600
nimbus.topology.validator: "backtype.storm.nimbus.DefaultTopologyValidator"
topology.min.replication.count: 1
topology.max.replication.wait.time.sec: 60
nimbus.credential.renewers.freq.secs: 600

### topology.* configs are for specific executing storms
topology.enable.message.timeouts: true
topology.debug: false
topology.workers: 1
topology.acker.executors: null
topology.eventlogger.executors: null
topology.tasks: null
# maximum amount of time a message has to complete before it's considered failed
topology.message.timeout.secs: 30
topology.multilang.serializer: "backtype.storm.multilang.JsonSerializer"
topology.skip.missing.kryo.registrations: false
topology.max.task.parallelism: null
topology.max.spout.pending: null
topology.state.synchronization.timeout.secs: 60
topology.stats.sample.rate: 0.05
topology.builtin.metrics.bucket.size.secs: 60
topology.fall.back.on.java.serialization: true
topology.worker.childopts: null
topology.worker.logwriter.childopts: "-Xmx64m"
topology.executor.receive.buffer.size: 1024 # batched
topology.executor.send.buffer.size: 1024 # individual messages
topology.transfer.buffer.size: 1024 # batched
topology.tick.tuple.freq.secs: null
topology.worker.shared.thread.pool.size: 4
topology.disruptor.wait.strategy: "com.lmax.disruptor.BlockingWaitStrategy"
topology.spout.wait.strategy: "backtype.storm.spout.SleepSpoutWaitStrategy"
topology.sleep.spout.wait.strategy.time.ms: 1
topology.error.throttle.interval.secs: 10
topology.max.error.report.per.interval: 5
topology.kryo.factory: "backtype.storm.serialization.DefaultKryoFactory"
topology.tuple.serializer: "backtype.storm.serialization.types.ListDelegateSerializer"
topology.trident.batch.emit.interval.millis: 500
topology.testing.always.try.serialize: false
topology.classpath: null
topology.environment: null
topology.bolts.outgoing.overflow.buffer.enable: false
topology.disruptor.wait.timeout.millis: 1000

On Mon, Sep 28, 2015 at 3:54 AM, researcher cs <[email protected]> wrote:

> Thanks for replying. I don't have a parameter for the topology jar size, and I
> increased nimbus.thrift.max_buffer_size to 40000000, but the problem persists.
>
> On Mon, Sep 28, 2015 at 3:40 AM, Debaditya Goswami <[email protected]> wrote:
>
>> Hi,
>>
>> Could you confirm the size of your topology.jar file?
>>
>> Then compare this with the nimbus.thrift.max_buffer_size parameter in your
>> storm.yaml configuration. This storm.yaml file will be present in the storm
>> directory on your nimbus node.
>>
>> In case the above parameter is missing from your storm.yaml file, it is
>> likely set to the default value (which may be smaller than your topology.jar
>> file). Just add a line setting the value (in bytes) large enough to
>> encompass your jar.
>>
>> E.g. nimbus.thrift.max_buffer_size: 40000000
>>
>> Regards,
>>
>> Deb
>>
>> ------------------------------
>> *From:* researcher cs <[email protected]>
>> *Sent:* Monday, September 28, 2015 8:56 AM
>> *To:* [email protected]
>> *Subject:* Connection refused in submitting topology
>>
>> I'm new to Storm and am facing this problem while submitting a topology.
>>
>> This is some of the data in the nimbus log file:
>>
>> [ERROR] Unexpected exception while invoking!
>> java.lang.NullPointerException
>>     at clojure.lang.Numbers.ops(Numbers.java:942)
>> [ERROR] Unexpected exception while invoking!
>> java.lang.NullPointerException
>>     at clojure.lang.Numbers.ops(Numbers.java:942)
>>     at clojure.lang.Numbers.isPos(Numbers.java:94)
>>     at clojure.core$take$fn__4112.invoke(core.clj:2500)
>>     at clojure.lang.LazySeq.sval(LazySeq.java:42)
>>     at clojure.lang.LazySeq.seq(LazySeq.java:60)
>>     at clojure.lang.RT.seq(RT.java:473)
>>     at clojure.core$seq.invoke(core.clj:133)
>>     at clojure.core$concat$fn__3804.invoke(core.clj:662)
>>     at clojure.lang.LazySeq.sval(LazySeq.java:42)
>>     at clojure.lang.LazySeq.seq(LazySeq.java:60)
>>     at clojure.lang.RT.seq(RT.java:473)
>>     at clojure.core$seq.invoke(core.clj:133)
>>     at clojure.core$concat$cat__3806$fn__3807.invoke(core.clj:671)
>>     at clojure.lang.LazySeq.sval(LazySeq.java:42)
>>     at clojure.lang.LazySeq.seq(LazySeq.java:60)
>>     at clojure.lang.RT.seq(RT.java:473)
>>     at clojure.core$seq.invoke(core.clj:133)
>>
>> And this is what I get when submitting the topology:
>>
>> Exception in thread "main" java.lang.RuntimeException:
>> org.apache.thrift7.transport.TTransportException: java.net.ConnectException:
>> Connection refused
>>     at backtype.storm.utils.NimbusClient.<init>(NimbusClient.java:36)
>>     at backtype.storm.utils.NimbusClient.getConfiguredClient(NimbusClient.java:17)
>>     at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:69)
>>     at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:40)
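To double-check Deb's suggestion about comparing the jar against the configured buffer, something like the following can be scripted. This is a minimal sketch, not part of Storm: the jar and storm.yaml paths are placeholders for your installation, the parsing is a plain text scan rather than a real YAML load, and the 1048576-byte fallback is an assumption about the default used when the key is absent.

```python
import os
import re

# Placeholder paths -- adjust to your installation.
STORM_YAML = "/opt/storm/conf/storm.yaml"
TOPOLOGY_JAR = "/path/to/topology.jar"

# Assumed default (1 MB) applied when nimbus.thrift.max_buffer_size is not set.
DEFAULT_MAX_BUFFER = 1048576

def read_max_buffer(yaml_path):
    """Scan storm.yaml line by line for nimbus.thrift.max_buffer_size."""
    try:
        with open(yaml_path) as f:
            for line in f:
                m = re.match(r"\s*nimbus\.thrift\.max_buffer_size\s*:\s*(\d+)", line)
                if m:
                    return int(m.group(1))
    except FileNotFoundError:
        pass  # no storm.yaml found -> fall back to the assumed default
    return DEFAULT_MAX_BUFFER

def jar_fits(jar_path, max_buffer):
    """True if the jar's size in bytes is within the thrift buffer limit."""
    return os.path.getsize(jar_path) <= max_buffer
```

With the config posted above (nimbus.thrift.max_buffer_size: 80000000), only a jar larger than roughly 80 MB would fail this check; a "Connection refused" during submission usually points elsewhere, e.g. nimbus not actually listening on the configured host and port.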

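As a side note on the Netty retry comment in the config above, the worst-case reconnection window implied by max_retries, min_wait_ms, and max_wait_ms can be bounded with a few lines. This is a sketch assuming a simple doubling backoff capped at max_wait_ms; Storm's actual retry schedule may differ in detail, but the upper bound of retries times max_wait still holds.

```python
def reconnect_window_secs(max_retries, min_wait_ms, max_wait_ms):
    """Upper-bound the total reconnection time in seconds, assuming each
    retry waits for a doubling backoff capped at max_wait_ms."""
    total_ms = 0
    wait = min_wait_ms
    for _ in range(max_retries):
        total_ms += min(wait, max_wait_ms)
        wait *= 2  # assumed doubling; the real schedule may differ
    return total_ms / 1000.0

# With the posted values (300 retries, 100 ms min, 1000 ms max) this comes
# to about 297.5 s, comfortably above nimbus.task.launch.secs (120) and
# storm.zookeeper.session.timeout (20 s), as the config comment requires.
window = reconnect_window_secs(300, 100, 1000)
```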