[
https://issues.apache.org/jira/browse/ARTEMIS-450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209608#comment-16209608
]
ASF GitHub Bot commented on ARTEMIS-450:
----------------------------------------
Github user franz1981 commented on the issue:
https://github.com/apache/activemq-artemis/pull/1596
@clebertsuconic I do not agree with that: the change I made was validated
by tests, and the documentation says that when the initial delay isn't
specified it defaults to the period, so the expectation is that
`getInitialDelay` returns the period in that case too.
As I said, for me it's the same, but IMO if you want to add a new semantic
for a null initialDelay you need to specify it somewhere (i.e. docs/tests
and/or, even better, in the API) in order to avoid subtle bugs and/or
having to change tests that only check a get after a set.
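The defaulting behavior under discussion can be sketched as follows. This is a minimal illustration, not the actual Artemis API: the class and field names are hypothetical, and "not specified" is modeled here as a negative value.

```java
// Hypothetical sketch of the semantics discussed above: when no initial
// delay is configured, the getter falls back to the period, so a get
// after a set (or after no set at all) stays consistent with the
// documented scheduling behavior. Not actual Artemis code.
public class ScheduledComponentSketch {
    private final long initialDelay; // < 0 models "not specified"
    private final long period;

    public ScheduledComponentSketch(long initialDelay, long period) {
        this.initialDelay = initialDelay;
        this.period = period;
    }

    // Defaults to the period when no initial delay was specified.
    public long getInitialDelay() {
        return initialDelay >= 0 ? initialDelay : period;
    }

    public static void main(String[] args) {
        ScheduledComponentSketch unset = new ScheduledComponentSketch(-1, 100);
        ScheduledComponentSketch set = new ScheduledComponentSketch(5, 100);
        System.out.println(unset.getInitialDelay()); // prints 100
        System.out.println(set.getInitialDelay());   // prints 5
    }
}
```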
> Deadlocked broker over addHead and Rollback with AMQP
> -----------------------------------------------------
>
> Key: ARTEMIS-450
> URL: https://issues.apache.org/jira/browse/ARTEMIS-450
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Components: AMQP, Broker
> Affects Versions: 1.2.0
> Reporter: Gordon Sim
> Assignee: clebert suconic
> Fix For: 2.4.0
>
> Attachments: stack-dump.txt, thread-dump-1.3.txt
>
>
> Not sure exactly how it came about, I noticed it on trying to shutdown the
> broker. The log has:
> {noformat}
> 21:43:17,985 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:43:18,986 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:43:19,986 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:43:20,986 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:43:28,928 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:43:45,937 WARN [org.apache.activemq.artemis.core.server] AMQ222174: Queue
> examples, on address=myqueue, is taking too long to flush deliveries. Watch
> out for frozen clients.
> 21:44:18,698 WARN [org.apache.activemq.artemis.core.client] AMQ212037:
> Connection failure has been detected: AMQ119014: Did not receive data from
> /127.0.0.1:51232. It is likely the client has exited or crashed without
> closing its connection, or the network between the server and client has
> failed. You also might have configured connection-ttl and
> client-failure-check-period incorrectly. Please check user manual for more
> information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
> 21:44:18,698 WARN [org.apache.activemq.artemis.core.server] AMQ222061:
> Client connection failed, clearing up resources for session
> ebd714e5-efad-11e5-83fc-fe540024bf8d
> Exception in thread "Thread-0
> (ActiveMQ-AIO-poller-pool2081191879-2061347276)" java.lang.Error:
> java.io.IOException: Error while submitting IO: Interrupted system call
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Error while submitting IO: Interrupted system
> call
> at org.apache.activemq.artemis.jlibaio.LibaioContext.blockedPoll(Native
> Method)
> at
> org.apache.activemq.artemis.jlibaio.LibaioContext.poll(LibaioContext.java:360)
> at
> org.apache.activemq.artemis.core.io.aio.AIOSequentialFileFactory$PollerRunnable.run(AIOSequentialFileFactory.java:355)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> ... 2 more
> {noformat}
> I'll attach a thread dump in which you will see Thread-10 has locked the
> handler lock in AbstractConnectionContext
> (part of the 'proton plug'), and is itself blocked on the lock in
> ServerConsumerImpl, which is held by Thread-21. Thread-21 is waiting
> for a write lock on the deliveryLock in ServerConsumerImpl. However
> Thread-20 already has a read lock on this, and is blocked (while
> holding the read lock) on the same handler lock within the proton plug
> (object 0x00000000f3d2bd90) that Thread-10 has locked.
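The three-way cycle described in the thread dump can be reproduced in miniature. This is an illustrative reconstruction, not Artemis code: `handlerLock` stands in for the AbstractConnectionContext handler lock, `consumerLock` for the ServerConsumerImpl lock, and `deliveryLock` for its read/write deliveryLock. Each thread takes its first lock and then attempts the next one with a timeout, so the demo terminates instead of hanging like the broker did.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative reconstruction of the cycle: Thread-10 holds the handler
// lock and wants the consumer lock; Thread-21 holds the consumer lock and
// wants the write side of deliveryLock; Thread-20 holds a read lock on
// deliveryLock and wants the handler lock. Every acquisition in the cycle
// blocks, which is the deadlock.
public class DeadlockCycleSketch {
    static final Lock handlerLock = new ReentrantLock();  // ~ handler lock
    static final Lock consumerLock = new ReentrantLock(); // ~ ServerConsumerImpl lock
    static final ReentrantReadWriteLock deliveryLock = new ReentrantReadWriteLock();

    static final CountDownLatch ready = new CountDownLatch(3);
    static final CountDownLatch done = new CountDownLatch(3);

    static void cycleStep(String name, Lock first, Lock second, String secondName) {
        first.lock();
        try {
            ready.countDown();
            ready.await(); // wait until all three first locks are held
            // With all first locks held, every second acquisition times out.
            if (!second.tryLock(500, TimeUnit.MILLISECONDS)) {
                System.out.println(name + " blocked waiting for " + secondName);
            } else {
                second.unlock();
            }
            done.countDown();
            done.await(); // keep the first lock until everyone has timed out
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t10 = new Thread(() ->
            cycleStep("Thread-10", handlerLock, consumerLock, "consumer lock"));
        Thread t21 = new Thread(() ->
            cycleStep("Thread-21", consumerLock, deliveryLock.writeLock(), "deliveryLock (write)"));
        Thread t20 = new Thread(() ->
            cycleStep("Thread-20", deliveryLock.readLock(), handlerLock, "handler lock"));
        t10.start(); t21.start(); t20.start();
        t10.join(); t21.join(); t20.join();
    }
}
```

Note that the write-lock request by "Thread-21" is blocked even though "Thread-20" only holds a read lock, which matches the dump: a pending writer cannot proceed while any reader is in, so the reader's further blocking on the handler lock closes the cycle.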
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)