Thanks for your patience and time. I will use debug now. But is there any
setting or configuration for the time allowed to the spout? How can I
increase it to try?
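For example, is it the message timeout, something like this (just my guess at
the right setting)?
Config conf = new Config();
conf.setMessageTimeoutSecs(120); // default seems to be 30 seconds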

On Thursday, July 27, 2017, Stig Rohde Døssing <[email protected]> wrote:
> Last message accidentally went to you directly instead of the mailing
list.
>
> Never mind what I wrote about worker slots. I think you should check that
all tuples are being acked first. Then you might want to try enabling debug
logging. You should also verify that your spout is emitting all the
expected tuples. Since you're talking about a result file, I'm assuming
your spout output is limited.
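> One way to make acking visible is to log from the spout's ack() and fail()
callbacks. A rough sketch for a plain (non-Trident) spout, with made-up names:
> import java.util.Map;
> import backtype.storm.spout.SpoutOutputCollector;
> import backtype.storm.task.TopologyContext;
> import backtype.storm.topology.OutputFieldsDeclarer;
> import backtype.storm.topology.base.BaseRichSpout;
> import backtype.storm.tuple.Fields;
> import backtype.storm.tuple.Values;
>
> public class LoggingSpout extends BaseRichSpout {
>     private SpoutOutputCollector collector;
>     private int emitted = 0;
>
>     public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
>         this.collector = collector;
>     }
>
>     public void nextTuple() {
>         // Emitting with a message id makes Storm track the tuple tree.
>         collector.emit(new Values("line-" + emitted), emitted);
>         emitted++;
>     }
>
>     public void ack(Object msgId) { System.out.println("acked " + msgId); }
>
>     public void fail(Object msgId) { System.out.println("FAILED " + msgId); }
>
>     public void declareOutputFields(OutputFieldsDeclarer declarer) {
>         declarer.declare(new Fields("line"));
>     }
> }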
>
> 2017-07-27 10:36 GMT+02:00 Stig Rohde Døssing <[email protected]>:
>>
>> Okay. Unless you're seeing out of memory errors or know that your
garbage collector is thrashing, I don't know why changing your xmx would
help. Without knowing more about your topology it's hard to say what's
going wrong. I think your best bet is to enable debug logging and try to
figure out what happens when the topology stops writing to your result
file. When you run your topology on a distributed cluster, you can use
Storm UI to verify that all your tuples are being acked; maybe your tuple
trees are not being acked correctly?
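>> For comparison, the usual anchor-then-ack pattern in a plain (non-Trident)
bolt looks roughly like this (a sketch, names made up):
>> import java.util.Map;
>> import backtype.storm.task.OutputCollector;
>> import backtype.storm.task.TopologyContext;
>> import backtype.storm.topology.OutputFieldsDeclarer;
>> import backtype.storm.topology.base.BaseRichBolt;
>> import backtype.storm.tuple.Fields;
>> import backtype.storm.tuple.Tuple;
>> import backtype.storm.tuple.Values;
>>
>> public class AnchoredBolt extends BaseRichBolt {
>>     private OutputCollector collector;
>>
>>     public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
>>         this.collector = collector;
>>     }
>>
>>     public void execute(Tuple input) {
>>         // Anchor the output to the input so it joins the same tuple tree,
>>         collector.emit(input, new Values(input.getString(0)));
>>         // then ack the input. A missing ack() here stalls the whole tree
>>         // until it fails on timeout.
>>         collector.ack(input);
>>     }
>>
>>     public void declareOutputFields(OutputFieldsDeclarer declarer) {
>>         declarer.declare(new Fields("value"));
>>     }
>> }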
>>
>> Multiple topologies shouldn't be interfering with each other; the only
thing I can think of is if you have too few worker slots and some of your
topology's components are not being assigned to a worker. You can see this
as well in Storm UI.
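>> If slots turn out to be the problem, one option is to have the topology
request fewer workers, for example:
>> Config conf = new Config();
>> conf.setNumWorkers(1); // stay within the number of free slots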
>>
>> 2017-07-27 8:11 GMT+02:00 sam mohel <[email protected]>:
>>>
>>> Yes, I tried 2048 and 4096 to give the worker more memory, but I got the
same problem.
>>>
>>> I have a result file. It should contain the result of my processing. The
size of this file should be 7 MB, but what I got after submitting the
topology was only 50 KB.
>>>
>>> I submitted this topology before, about 4 months ago. But when I submitted
it now, I got this problem.
>>>
>>> How could the topology work well before but not now?
>>>
>>> A silly question, and sorry for that:
>>> I submitted three other topologies besides this one. Does that weaken the
memory? Or should I clean something up after them?
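>>> For example, should I kill them first to free their worker slots, with
whatever name each one was submitted under?
>>> storm kill topology-name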
>>>
>>> On Thursday, July 27, 2017, Stig Rohde Døssing <[email protected]> wrote:
>>> > As far as I can tell the default xmx for workers in 0.10.2 is 768
megs (https://github.com/apache/storm/blob/v0.10.2/conf/defaults.yaml#L134),
but your supervisor log shows the following:
>>> > "Launching worker with command: <snip> -Xmx2048m". Is this the right
configuration?
>>> >
>>> > Regarding the worker log, it looks like the components are
initialized correctly, all the bolts report that they're done running
prepare(). Could you explain what you expect the logs to look like and what
you expect to happen when you run the topology?
>>> >
>>> > It's sometimes helpful to enable debug logging if your topology acts
strange; consider trying that by setting:
>>> > Config conf = new Config();
>>> > conf.setDebug(true);
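>>> > With debug on, the workers log every emit and ack, so you can follow
individual tuples through the topology. The log volume gets large, so it's
best kept to test runs.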
>>> >
>>> > 2017-07-27 1:43 GMT+02:00 sam mohel <[email protected]>:
>>> >>
>>> >> Same problem with distributed mode. I tried to submit the topology in
distributed mode on localhost and attached the log files of the worker and
supervisor.
>>> >>
>>> >>
>>> >>
>>> >> On Thursday, July 27, 2017, sam mohel <[email protected]> wrote:
>>> >> > I submit my topology with these commands:
>>> >> > mvn package
>>> >> > mvn compile exec:java -Dexec.classpathScope=compile
-Dexec.mainClass=trident.Topology
>>> >> > and I copied these lines:
>>> >> > 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt b-4:(40)
>>> >> > 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
b-2:(14)
>>> >> > 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
b-5:(45)
>>> >> > sam@lenovo:~/first-topology$
>>> >> > from what I saw in the terminal. I checked the size of the result
file and found it's 50 KB each time I submit it.
>>> >> > What should I check?
>>> >> > On Wed, Jul 26, 2017 at 9:05 PM, Bobby Evans <[email protected]>
wrote:
>>> >> >>
>>> >> >> Local mode is totally separate and there are no processes
launched except the original one.  Those values are ignored in local mode.
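>>> >> >> If you want a bigger heap in local mode you have to raise it on the
JVM you launch the topology from. With mvn exec:java that should be something
like this, assuming your shell passes MAVEN_OPTS to Maven's JVM:
>>> >> >> export MAVEN_OPTS="-Xmx4g"
>>> >> >> mvn compile exec:java -Dexec.classpathScope=compile -Dexec.mainClass=trident.Topology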
>>> >> >>
>>> >> >>
>>> >> >> - Bobby
>>> >> >>
>>> >> >>
>>> >> >> On Wednesday, July 26, 2017, 2:01:52 PM CDT, sam mohel <
[email protected]> wrote:
>>> >> >>
>>> >> >> Thanks so much for replying. I tried to submit the topology in local
mode... I increased the size of the worker like this:
>>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,"-Xmx4096m" );
>>> >> >>
>>> >> >> but got this in the terminal:
>>> >> >> 11920 [Thread-121-b-4] INFO  b.s.d.executor - Preparing bolt
b-4:(25)
>>> >> >> 11935 [Thread-121-b-4] INFO  b.s.d.executor - Prepared bolt
b-4:(25)
>>> >> >> 11920 [Thread-67-b-5] INFO  b.s.d.executor - Preparing bolt
b-5:(48)
>>> >> >> 11936 [Thread-67-b-5] INFO  b.s.d.executor - Prepared bolt
b-5:(48)
>>> >> >> 11919 [Thread-105-b-2] INFO  b.s.d.executor - Prepared bolt
b-2:(10)
>>> >> >> 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt
b-4:(40)
>>> >> >> 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
b-2:(14)
>>> >> >> 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
b-5:(45)
>>> >> >> sam@lenovo:~/first-topology$
>>> >> >> and it didn't complete processing. The size of the result is 50 KB.
This topology was working well without any problems, but when I tried to
submit it now, I didn't get the full result.
>>> >> >>
>>> >> >> On Wed, Jul 26, 2017 at 8:35 PM, Bobby Evans <[email protected]>
wrote:
>>> >> >>
>>> >> >> worker.childopts is the default value that is set by the system
administrator in storm.yaml on each of the supervisor nodes.
topology.worker.childopts is what you set in your topology conf if you want
to add something more to the command line.
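>>> >> >> For example, the cluster-wide default goes in storm.yaml on each
supervisor node:
>>> >> >> worker.childopts: "-Xmx768m"
>>> >> >> and a single topology can append its own options in code before submitting:
>>> >> >> Config conf = new Config();
>>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx2048m");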
>>> >> >>
>>> >> >>
>>> >> >> - Bobby
>>> >> >>
>>> >> >>
>>> >> >> On Tuesday, July 25, 2017, 11:50:04 PM CDT, sam mohel <
[email protected]> wrote:
>>> >> >>
>>> >> >> I'm using version 0.10.2. I tried to write this in the code:
>>> >> >> conf.put(Config.WORKER_CHILDOPTS, "-Xmx4g");
>>> >> >> conf.put(Config.SUPERVISOR_CHILDOPTS, "-Xmx4g");
>>> >> >>
>>> >> >> but I didn't notice any effect. Did I write the right
configurations?
>>> >> >> Is this value the largest allowed?
>>> >> >>
>>> >> >
>>> >> >
>>> >
>
>
