Thanks for the reply. How can I check whether bolts are dying?
This scenario happens every time I start my topology, before any Kafka
messages are sent out. I don't understand why the bolts at the destination
would die.

I hope this can be fixed, because it occurs every time.

On Mon, Jul 20, 2015 at 6:31 PM Enno Shioji <[email protected]> wrote:

> It can happen when the bolts at the destination die. Do you see logs
> indicating that bolts are dying on 15.50.53.52:6703?
>
> If it doesn't happen frequently, I wouldn't worry too much as there are
> mechanisms to cope with bolt deaths. If there is some systemic problem
> (like a skew blowing up the memory and killing bolts again and again), then
> you probably want to fix that.
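>
> If you're not sure what is killing them, a defensive try/catch inside the
> bolt can make the cause obvious in the worker log. A minimal sketch,
> assuming the Storm 0.9.x backtype.storm API (the class name and logic here
> are illustrative placeholders, not your actual bolt):
>
> import backtype.storm.topology.BasicOutputCollector;
> import backtype.storm.topology.OutputFieldsDeclarer;
> import backtype.storm.topology.base.BaseBasicBolt;
> import backtype.storm.tuple.Tuple;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> // Illustrative sketch: log the exception before it propagates, so the
> // worker log on the affected host shows what actually killed the bolt.
> public class LoggingBolt extends BaseBasicBolt {
>     private static final Logger LOG = LoggerFactory.getLogger(LoggingBolt.class);
>
>     @Override
>     public void execute(Tuple tuple, BasicOutputCollector collector) {
>         try {
>             // ... your real bolt logic goes here ...
>         } catch (RuntimeException e) {
>             LOG.error("Bolt died while processing {}", tuple, e);
>             throw e; // rethrow so Storm still sees the failure
>         }
>     }
>
>     @Override
>     public void declareOutputFields(OutputFieldsDeclarer declarer) {
>         // no output fields in this sketch
>     }
> }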
>
>
> On Mon, Jul 20, 2015 at 11:21 AM, 张炜 <[email protected]> wrote:
>
>> Hi all,
>> I did see the "Giving up" message, as shown below.
>>
>> May I know why this happens and how to solve it?
>> I have several bolts with parallelism 50; most of them are established
>> successfully, but a few fail like this.
>>
>>
>> 2015-07-20 10:07:01.084 STDIO [ERROR] Jul 20, 2015 10:07:01 AM org.apache.storm.guava.util.concurrent.ExecutionList executeListener
>> SEVERE: RuntimeException while executing runnable org.apache.storm.guava.util.concurrent.Futures$4@1aa49e9a with executor org.apache.storm.guava.util.concurrent.MoreExecutors$SameThreadExecutorService@2da89e20
>> java.lang.RuntimeException: Failed to connect to Netty-Client-c9t07982.itcs.****.com/15.50.53.52:6703
>> at backtype.storm.messaging.netty.Client.connect(Client.java:300)
>> at backtype.storm.messaging.netty.Client.access$1100(Client.java:66)
>> at backtype.storm.messaging.netty.Client$2.reconnectAgain(Client.java:289)
>> at backtype.storm.messaging.netty.Client$2.onSuccess(Client.java:275)
>> at backtype.storm.messaging.netty.Client$2.onSuccess(Client.java:267)
>> at org.apache.storm.guava.util.concurrent.Futures$4.run(Futures.java:1181)
>> at org.apache.storm.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>> at org.apache.storm.guava.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>> at org.apache.storm.guava.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>> at org.apache.storm.guava.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:91)
>> at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:380)
>> at java.util.concurrent.FutureTask.set(FutureTask.java:229)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:270)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.RuntimeException: Giving up to connect to Netty-Client-c9t07982.itcs.****.com/15.50.53.52:6703 after 12 failed attempts
>> at backtype.storm.messaging.netty.Client.connect(Client.java:295)
>> ... 19 more
>>
>>
>> Regards,
>> Sai
>>
>>
>> On Thu, Jul 2, 2015 at 8:53 AM 임정택 <[email protected]> wrote:
>>
>>> Hi.
>>>
>>> It seems these are logs that occur while a connection attempt is in progress.
>>> (We may want to lower their log level, since they don't mean we have given
>>> up connecting, just that we are still attempting.)
>>>
>>> Your worker should print either "Giving up to connect to ..." or
>>> "connection established to ...".
>>>
>>> The former means the worker gave up connecting to the other worker, and so
>>> that worker kills itself.
>>> If you see this log, you may want to check that the workers can reach each
>>> other over the network (e.g. with a plain TCP probe like the sketch below).
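>>>
>>> A minimal reachability probe (plain JDK sockets, nothing Storm-specific;
>>> the default host and port below are only placeholders):
>>>
>>> import java.net.InetSocketAddress;
>>> import java.net.Socket;
>>>
>>> // Illustrative probe: can this node open a TCP connection to the
>>> // remote worker port that the Netty client keeps failing to reach?
>>> public class PortProbe {
>>>     public static void main(String[] args) throws Exception {
>>>         String host = args.length > 0 ? args[0] : "127.0.0.1";
>>>         int port = args.length > 1 ? Integer.parseInt(args[1]) : 6703;
>>>         try (Socket s = new Socket()) {
>>>             s.connect(new InetSocketAddress(host, port), 3000); // 3 s timeout
>>>             System.out.println("reachable: " + host + ":" + port);
>>>         } catch (java.io.IOException e) {
>>>             System.out.println("NOT reachable: " + host + ":" + port + " (" + e + ")");
>>>         }
>>>     }
>>> }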
>>>
>>> The latter means the worker succeeded in connecting to the other worker, so
>>> it is safe.
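>>>
>>> If the workers are simply slow to come up, another option is to give the
>>> Netty client more time before it gives up and the worker dies. A minimal
>>> sketch, assuming the Storm 0.9.x backtype.storm.Config keys (the values
>>> are illustrative, not recommendations):
>>>
>>> import backtype.storm.Config;
>>>
>>> // Illustrative: widen the Netty reconnect window so a slow-starting
>>> // worker is not given up on after a short burst of attempts. Note the
>>> // retry count is capped; that is where the "maxRetries too large (300).
>>> // Pinning to 29" warnings in your log come from.
>>> public class NettyRetryConf {
>>>     public static Config build() {
>>>         Config conf = new Config();
>>>         conf.put(Config.STORM_MESSAGING_NETTY_MAX_RETRIES, 29);
>>>         conf.put(Config.STORM_MESSAGING_NETTY_MIN_SLEEP_MS, 100);
>>>         conf.put(Config.STORM_MESSAGING_NETTY_MAX_SLEEP_MS, 5000);
>>>         return conf;
>>>     }
>>> }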
>>>
>>> Hope this helps.
>>>
>>> Thanks,
>>> Jungtaek Lim (HeartSaVioR)
>>>
>>>
>>>
>>> 2015-07-02 8:59 GMT+09:00 张炜 <[email protected]>:
>>>
>>>> Hi all,
>>>> I ran into some strange problems; below is some typical log output.
>>>> Could you please help me figure out where the problem is? Or could anyone
>>>> point me to which areas I should start investigating and where to find
>>>> related materials?
>>>>
>>>> These things look so internal to me that I don't know where to start.
>>>> Thank you very much!
>>>>
>>>> Regards,
>>>> Sai
>>>>
>>>> 2015-07-01T01:42:53.290+0000 b.s.d.executor [INFO] Shutting down executor mycheck:[97 97]
>>>> 2015-07-01T01:42:53.290+0000 b.s.util [INFO] Async loop interrupted!
>>>> 2015-07-01T01:42:53.291+0000 b.s.util [INFO] Async loop interrupted!
>>>> 2015-07-01T01:42:53.291+0000 b.s.d.executor [INFO] Shut down executor mycheck:[97 97]
>>>>
>>>>
>>>> 2015-07-01T01:42:52.868+0000 b.s.m.n.Client [INFO] connection attempt 1 to Netty-Client-localhost/127.0.0.1:6702 scheduled to run in 0 ms
>>>> 2015-07-01T01:42:52.875+0000 b.s.m.n.Client [INFO] connection established to Netty-Client-localhost/127.0.0.1:6702
>>>> 2015-07-01T01:42:52.889+0000 b.s.m.loader [INFO] Shutting down receiving-thread: [Topology-2-1435680709, 6702]
>>>> 2015-07-01T01:42:52.891+0000 b.s.m.n.Client [INFO] closing Netty Client Netty-Client-localhost/127.0.0.1:6702
>>>> 2015-07-01T01:42:52.891+0000 b.s.m.n.Client [INFO] waiting up to 600000 ms to send 1 pending messages to Netty-Client-localhost/127.0.0.1:6702
>>>> 2015-07-01T15:49:30.089+0000 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
>>>> 2015-07-01T15:49:30.132+0000 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
>>>> 2015-07-01T15:49:30.148+0000 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
>>>> 2015-07-01T15:49:30.150+0000 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
>>>> 2015-07-01T15:49:30.151+0000 o.a.s.c.r.ExponentialBackoffRetry [WARN] maxRetries too large (300). Pinning to 29
>>>> 2015-07-01T15:49:30.268+0000 b.s.m.n.Client [ERROR] connection attempt 1 to Netty-Client- failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:30.291+0000 b.s.m.n.Client [ERROR] connection attempt 1 to Netty-Client-6703 failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:30.379+0000 b.s.m.n.Client [ERROR] connection attempt 2 to Netty-Client- failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:30.886+0000 b.s.m.n.Client [ERROR] connection attempt 2 to Netty-Client-/15.50.46.234:6703 failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:30.912+0000 b.s.m.n.Client [ERROR] connection attempt 3 to Netty-Client-:6702 failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:31.010+0000 b.s.m.n.Client [ERROR] connection attempt 3 to Netty-Client-:6703 failed: java.lang.RuntimeException: Returned channel was actually not established
>>>> 2015-07-01T15:49:31.047+0000 b.s.m.n.Client [ERROR] connection attempt 4 to Netty-Client-:6702 failed: java.lang.RuntimeException: Returned channel was actually not established
>>>>
>>>>
>>>
>>>
>>> --
>>> Name : 임 정택
>>> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
>>> Twitter : http://twitter.com/heartsavior
>>> LinkedIn : http://www.linkedin.com/in/heartsavior
>>>
>>
>
