I submitted the topology in distributed mode with localhost, and I didn't use
anything to shut it down. The strange thing is that I submitted this topology
before without any problems, but now I get this issue. Is there anything I
should check?

On Thu, Jul 27, 2017 at 9:59 PM, John, Dintu <dintu.j...@searshc.com> wrote:

> Are you using LocalCluster.shutdown or killTopology in the main method
> once you submit the topology? From the logs it looks like that…
>
>
>
>
>
> *Thanks & Regards*
>
> Dintu Alex John
>
>
>
>
>
> *From:* sam mohel [mailto:sammoh...@gmail.com]
> *Sent:* Thursday, July 27, 2017 2:54 PM
> *To:* user@storm.apache.org; s...@apache.org
> *Subject:* Re: Setting heap size parameters by workers.childopts and
> supervisor.childopts
>
>
>
> I forgot to mention that I also tried increasing topology.message.timeout.secs
> to 180, but that didn't work either.
>
>
>
> On Thu, Jul 27, 2017 at 9:52 PM, sam mohel <sammoh...@gmail.com> wrote:
>
> I tried enabling debug and got this in worker.log.err:
>
> 2017-07-27 21:47:48,868 FATAL Unable to register shutdown hook because JVM
> is shutting down.
>
>
>
> and these lines from worker.log:
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-1:27, stream: __ack_ack, id: {},
> [3247365064986003851 -431522470795602124]
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:27, stream: __ack_ack, id: {}, [3247365064986003851
> -431522470795602124]
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:27, stream: __ack_ack, id: {}, [3247365064986003851
> -431522470795602124] TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-1:29, stream: __ack_ack, id: {},
> [3247365064986003851 -6442207219333745818]
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:29, stream: __ack_ack, id: {}, [3247365064986003851
> -6442207219333745818]
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:29, stream: __ack_ack, id: {}, [3247365064986003851
> -6442207219333745818] TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 5263752373603294688]
>
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5263752373603294688]
>
> 2017-07-27 21:47:48.868 b.s.d.worker [INFO] Shutting down worker
> top-1-1501184820 9adf5f4c-dc5b-47b5-a458-40defe84fe9e 6703
>
> 2017-07-27 21:47:48.868 b.s.d.worker [INFO] Shutting down receive thread
>
> 2017-07-27 21:47:48.869 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:31, stream: __ack_ack, id: {}, [3247365064986003851
> 4288963968930353157]
>
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:31, stream: __ack_ack, id: {}, [3247365064986003851
> 4288963968930353157] TASK: 1 DELTA: 60
>
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 5240959063117469257]
>
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5240959063117469257]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5240959063117469257] TASK: 1 DELTA: 1
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 7583382518734849127]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 7583382518734849127]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 7583382518734849127] TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 6840644970823833210]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 6840644970823833210]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 6840644970823833210] TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 -6463368911496394080]
>
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> -6463368911496394080]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> -6463368911496394080] TASK: 1 DELTA: 1
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 764549587969230513]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 764549587969230513]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851 764549587969230513]
> TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-5:35, stream: __ack_ack, id: {},
> [3247365064986003851 -4632707886455738545]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> -4632707886455738545]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> -4632707886455738545] TASK: 1 DELTA: 0
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-5:35, stream: __ack_ack, id: {},
> [3247365064986003851 2993206175355277727]
>
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> 2993206175355277727]
>
> 2017-07-27 21:47:48.875 b.s.d.executor [INFO] Execute done TUPLE source:
> b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> 2993206175355277727] TASK: 1 DELTA: 1
>
> 2017-07-27 21:47:48.898 b.s.m.n.Client [INFO] creating Netty Client,
> connecting to lenovo:6703, bufferSize: 5242880
>
> 2017-07-27 21:47:48.902 b.s.m.loader [INFO] Shutting down
> receiving-thread: [top-1-1501184820, 6703]
>
> 2017-07-27 21:47:48.902 b.s.m.n.Client [INFO] closing Netty Client
> Netty-Client-lenovo/192.168.1.5:6703
>
> 2017-07-27 21:47:48.902 b.s.m.n.Client [INFO] waiting up to 600000 ms to
> send 0 pending messages to Netty-Client-lenovo/192.168.1.5:6703
>
> 2017-07-27 21:47:48.902 b.s.m.loader [INFO] Waiting for
> receiving-thread:[top-1-1501184820, 6703] to die
>
> 2017-07-27 21:47:48.903 b.s.m.loader [INFO] Shutdown receiving-thread:
> [top-1-1501184820, 6703]
>
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Shut down receive thread
>
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Terminating messaging context
>
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Shutting down executors
>
> 2017-07-27 21:47:48.904 b.s.d.executor [INFO] Shutting down executor
> b-0:[8 8]
>
> 2017-07-27 21:47:48.905 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.905 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.906 b.s.d.executor [INFO] Shut down executor b-0:[8 8]
>
> 2017-07-27 21:47:48.906 b.s.d.executor [INFO] Shutting down executor
> b-8:[47 47]
>
> 2017-07-27 21:47:48.907 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.907 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shut down executor b-8:[47
> 47]
>
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shutting down executor
> b-0:[12 12]
>
> 2017-07-27 21:47:48.908 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.908 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shut down executor b-0:[12
> 12]
>
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shutting down executor
> b-8:[54 54]
>
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shut down executor b-8:[54
> 54]
>
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shutting down executor
> b-0:[2 2]
>
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shut down executor b-0:[2 2]
>
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shutting down executor
> b-2:[32 32]
>
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.910 b.s.util [INFO] Async loop interrupted!
>
> 2017-07-27 21:47:48.910 b.s.d.executor [INFO] Shut down executor b-2:[32
> 32]
>
> 2017-07-27 21:47:48.910 b.s.d.executor [INFO] Shutting down executor
> b-8:[41 41]
>
> 2017-07-27 21:47:48.910 b.s.util [INFO] Asy
>
>
>
> On Thu, Jul 27, 2017 at 3:11 PM, Stig Rohde Døssing <s...@apache.org>
> wrote:
>
> Yes, there is topology.message.timeout.secs for setting how long the
> topology has to process a message after it is emitted from the spout, and
> topology.enable.message.timeouts if you want to disable timeouts
> entirely. I'm assuming that's what you're asking?
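>
> As a rough sketch, assuming the backtype.storm.Config class from 0.10.x (the
> 180 second value below is only an illustration, not a recommendation), both
> settings could be applied from the topology code like this:
>
> Config conf = new Config();
> conf.setMessageTimeoutSecs(180);  // topology.message.timeout.secs
> // or, to disable tuple timeouts entirely:
> // conf.put(Config.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS, false);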
>
>
>
> 2017-07-27 15:03 GMT+02:00 sam mohel <sammoh...@gmail.com>:
>
> Thanks for your patience and time. I will use debug now. But is there any
> setting or configuration for the spout timeout? How can I increase it to
> try?
>
>
> On Thursday, July 27, 2017, Stig Rohde Døssing <s...@apache.org> wrote:
> > Last message accidentally went to you directly instead of the mailing
> list.
> >
> > Never mind what I wrote about worker slots. I think you should check
> that all tuples are being acked first. Then you might want to try enabling
> debug logging. You should also verify that your spout is emitting all the
> expected tuples. Since you're talking about a result file, I'm assuming
> your spout output is limited.
> >
> > 2017-07-27 10:36 GMT+02:00 Stig Rohde Døssing <s...@apache.org>:
> >>
> >> Okay. Unless you're seeing out of memory errors or know that your
> garbage collector is thrashing, I don't know why changing your xmx would
> help. Without knowing more about your topology it's hard to say what's
> going wrong. I think your best bet is to enable debug logging and try to
> figure out what happens when the topology stops writing to your result
> file. When you run your topology on a distributed cluster, you can use
> Storm UI to verify that all your tuples are being acked, maybe your tuple
> trees are not being acked correctly?
> >>
> >> Multiple topologies shouldn't be interfering with each other, the only
> thing I can think of is if you have too few worker slots and some of your
> topology's components are not being assigned to a worker. You can see this
> as well in Storm UI.
> >>
> >> 2017-07-27 8:11 GMT+02:00 sam mohel <sammoh...@gmail.com>:
> >>>
> >>> Yes, I tried 2048 and 4096 to give the worker more memory, but same problem.
> >>>
> >>> I have a result file that should contain the output of my processing.
> The size of this file should be about 7 MB, but what I got after submitting
> the topology is only 50 KB.
> >>>
> >>> I submitted this topology before, about four months ago, without any
> problems. But when I submitted it now I got this issue.
> >>>
> >>> How could the topology work well before but not now?
> >>>
> >>> A silly question, and sorry for that:
> >>> I submitted three other topologies besides this one. Could that exhaust
> memory? Or should I clean something up afterwards?
> >>>
> >>> On Thursday, July 27, 2017, Stig Rohde Døssing <s...@apache.org>
> wrote:
> >>> > As far as I can tell the default xmx for workers in 0.10.2 is 768
> megs (https://github.com/apache/storm/blob/v0.10.2/conf/defaults.yaml#L134),
> and your supervisor log shows the following:
> >>> > "Launching worker with command: <snip> -Xmx2048m". Is this the right
> configuration?
> >>> >
> >>> > Regarding the worker log, it looks like the components are
> initialized correctly, all the bolts report that they're done running
> prepare(). Could you explain what you expect the logs to look like and what
> you expect to happen when you run the topology?
> >>> >
> >>> > It's sometimes helpful to enable debug logging if your topology acts
> strange; consider trying that by setting:
> >>> > Config conf = new Config();
> >>> > conf.setDebug(true);
> >>> >
> >>> > 2017-07-27 1:43 GMT+02:00 sam mohel <sammoh...@gmail.com>:
> >>> >>
> >>> >> Same problem with distributed mode. I tried to submit the topology in
> distributed mode with localhost and attached the log files of the worker and
> supervisor.
> >>> >>
> >>> >>
> >>> >>
> >>> >> On Thursday, July 27, 2017, sam mohel <sammoh...@gmail.com> wrote:
> >>> >> > I submit my topology with these commands:
> >>> >> > mvn package
> >>> >> > mvn compile exec:java -Dexec.classpathScope=compile
> -Dexec.mainClass=trident.Topology
> >>> >> > and I copied these lines:
> >>> >> > 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt
> b-4:(40)
> >>> >> > 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
> b-2:(14)
> >>> >> > 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
> b-5:(45)
> >>> >> > sam@lenovo:~/first-topology$
> >>> >> > That is what I saw in the terminal. I checked the size of the result
> file and found it's 50 KB each time I submit it.
> >>> >> > What should I check?
> >>> >> > On Wed, Jul 26, 2017 at 9:05 PM, Bobby Evans <ev...@yahoo-inc.com>
> wrote:
> >>> >> >>
> >>> >> >> Local mode is totally separate and there are no processes
> launched except the original one.  Those values are ignored in local mode.
> >>> >> >>
> >>> >> >>
> >>> >> >> - Bobby
> >>> >> >>
> >>> >> >>
> >>> >> >> On Wednesday, July 26, 2017, 2:01:52 PM CDT, sam mohel <
> sammoh...@gmail.com> wrote:
> >>> >> >>
> >>> >> >> Thanks so much for replying. I tried to submit the topology in
> local mode and increased the worker size like this:
> >>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx4096m");
> >>> >> >>
> >>> >> >> but got this in the terminal:
> >>> >> >> 11920 [Thread-121-b-4] INFO  b.s.d.executor - Preparing bolt
> b-4:(25)
> >>> >> >> 11935 [Thread-121-b-4] INFO  b.s.d.executor - Prepared bolt
> b-4:(25)
> >>> >> >> 11920 [Thread-67-b-5] INFO  b.s.d.executor - Preparing bolt
> b-5:(48)
> >>> >> >> 11936 [Thread-67-b-5] INFO  b.s.d.executor - Prepared bolt
> b-5:(48)
> >>> >> >> 11919 [Thread-105-b-2] INFO  b.s.d.executor - Prepared bolt
> b-2:(10)
> >>> >> >> 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt
> b-4:(40)
> >>> >> >> 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
> b-2:(14)
> >>> >> >> 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
> b-5:(45)
> >>> >> >> sam@lenovo:~/first-topology$
> >>> >> >> and it didn't complete processing; the size of the result is 50 KB.
> This topology was working well without any problems, but when I tried to
> submit it now, I didn't get the full result.
> >>> >> >>
> >>> >> >> On Wed, Jul 26, 2017 at 8:35 PM, Bobby Evans <
> ev...@yahoo-inc.com> wrote:
> >>> >> >>
> >>> >> >> worker.childopts is the default value that is set by the system
> administrator in storm.yaml on each of the supervisor nodes.
> topology.worker.childopts is what you set in your topology conf if you want
> to add something more to the command line.
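> >>> >> >>
> >>> >> >> As a minimal sketch (the -Xmx value here is only an example), the
> per-topology variant could be set in the topology code before submitting:
> >>> >> >>
> >>> >> >> Config conf = new Config();
> >>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx2048m");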
> >>> >> >>
> >>> >> >>
> >>> >> >> - Bobby
> >>> >> >>
> >>> >> >>
> >>> >> >> On Tuesday, July 25, 2017, 11:50:04 PM CDT, sam mohel <
> sammoh...@gmail.com> wrote:
> >>> >> >>
> >>> >> >> I'm using version 0.10.2. I tried to write in the code:
> >>> >> >> conf.put(Config.WORKER_CHILDOPTS, "-Xmx4g");
> >>> >> >> conf.put(Config.SUPERVISOR_CHILDOPTS, "-Xmx4g");
> >>> >> >>
> >>> >> >> but I didn't see any effect. Did I write the right
> configurations? Is this the largest possible value?
> >>> >> >>
> >>> >> >
> >>> >> >
> >>> >
> >
> >
>
>
>
>
>
>
>
