[ https://issues.apache.org/jira/browse/ARTEMIS-450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209548#comment-16209548 ]

ASF GitHub Bot commented on ARTEMIS-450:
----------------------------------------

Github user franz1981 commented on a diff in the pull request:

    https://github.com/apache/activemq-artemis/pull/1596#discussion_r145455314
  
    --- Diff: artemis-commons/src/main/java/org/apache/activemq/artemis/core/server/ActiveMQScheduledComponent.java ---
    @@ -90,7 +90,7 @@ public ActiveMQScheduledComponent(ScheduledExecutorService scheduledExecutorServ
                                          long checkPeriod,
                                          TimeUnit timeUnit,
                                          boolean onDemand) {
    -      this(scheduledExecutorService, executor, checkPeriod, checkPeriod, timeUnit, onDemand);
    +      this(scheduledExecutorService, executor, -1, checkPeriod, timeUnit, onDemand);
    --- End diff ---
    
    I'm receiving an error in the test `testVerifyDefaultInitialDelay`:
    ```
    java.lang.AssertionError: The initial delay must be defaulted to the period 
    Expected :100
    Actual   :-1
    ```
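    The assertion encodes the documented default: a negative initial delay falls back to the period. A minimal, hypothetical sketch of that defaulting rule (class and accessor names are mine, not the Artemis constructor):

    ```java
    // Hypothetical sketch: a negative initialDelay defaults to the period,
    // which is what testVerifyDefaultInitialDelay expects (-1 -> checkPeriod).
    class ScheduledComponentSketch {
       private final long initialDelay;
       private final long period;

       ScheduledComponentSketch(long initialDelay, long checkPeriod) {
          // negative means "not set": fall back to the period
          this.initialDelay = initialDelay < 0 ? checkPeriod : initialDelay;
          this.period = checkPeriod;
       }

       long getInitialDelay() {
          return initialDelay;
       }

       long getPeriod() {
          return period;
       }
    }
    ```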
    Modifying things like this (and leaving the constructor as it is) doesn't break any tests:
    ```
       // this will restart the scheduled component upon changes
       private void restartIfNeeded() {
          if (isStarted()) {
             stop();
             // do not need to start with the initialDelay: the component was already running
             start(this.period);
          }
       }
    
       private void start(final long initialDelay) {
          if (future != null) {
             // already started
             return;
          }
    
          if (scheduledExecutorService == null) {
             scheduledExecutorService = new ScheduledThreadPoolExecutor(1, getThreadFactory());
             startedOwnScheduler = true;
    
          }
    
          if (onDemand) {
             return;
          }
    
          this.millisecondsPeriod = timeUnit.convert(period, TimeUnit.MILLISECONDS);
    
          if (period >= 0) {
             future = scheduledExecutorService.scheduleWithFixedDelay(runForScheduler, initialDelay, period, timeUnit);
          } else {
             logger.tracef("did not start scheduled executor on %s because period was configured as %d", this, period);
          }
       }
    
       @Override
       public synchronized void start() {
          start(this.initialDelay);
       }
    ```
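    The point of `restartIfNeeded()` is that a property setter can transparently restart a running component, passing the period as the delay so the change takes effect without re-applying the initial delay. A self-contained sketch of that pattern (all names here are illustrative, not the Artemis API):

    ```java
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.ScheduledThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Illustrative, self-contained version of the restart-on-reconfiguration
    // pattern discussed above; not Artemis code.
    class RestartableComponent {
       private final ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);
       private long period;
       private ScheduledFuture<?> future;

       RestartableComponent(long period) {
          this.period = period;
       }

       synchronized void start() {
          start(this.period); // first start: delay defaults to the period
       }

       private synchronized void start(long initialDelay) {
          if (future != null) {
             return; // already started
          }
          future = scheduler.scheduleWithFixedDelay(() -> { }, initialDelay, period, TimeUnit.MILLISECONDS);
       }

       synchronized void stop() {
          if (future != null) {
             future.cancel(false);
             future = null;
          }
       }

       synchronized boolean isStarted() {
          return future != null;
       }

       // changing the period restarts a running component, passing the (new)
       // period as the delay, mirroring restartIfNeeded() above
       synchronized void setPeriod(long newPeriod) {
          this.period = newPeriod;
          if (isStarted()) {
             stop();
             start(newPeriod);
          }
       }

       synchronized long getPeriod() {
          return period;
       }

       void shutdown() {
          scheduler.shutdownNow(); // release the scheduler thread
       }
    }
    ```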


> Deadlocked broker over addHead and Rollback with AMQP
> -----------------------------------------------------
>
>                 Key: ARTEMIS-450
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-450
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP, Broker
>    Affects Versions: 1.2.0
>            Reporter: Gordon Sim
>            Assignee: clebert suconic
>             Fix For: 2.4.0
>
>         Attachments: stack-dump.txt, thread-dump-1.3.txt
>
>
> Not sure exactly how it came about; I noticed it when trying to shut down the broker. The log has:
> {noformat}
> 21:43:17,985 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:18,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:19,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:20,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:28,928 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:45,937 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /127.0.0.1:51232. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session ebd714e5-efad-11e5-83fc-fe540024bf8d
> Exception in thread "Thread-0 (ActiveMQ-AIO-poller-pool2081191879-2061347276)" java.lang.Error: java.io.IOException: Error while submitting IO: Interrupted system call
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Error while submitting IO: Interrupted system call
>       at org.apache.activemq.artemis.jlibaio.LibaioContext.blockedPoll(Native Method)
>       at org.apache.activemq.artemis.jlibaio.LibaioContext.poll(LibaioContext.java:360)
>       at org.apache.activemq.artemis.core.io.aio.AIOSequentialFileFactory$PollerRunnable.run(AIOSequentialFileFactory.java:355)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       ... 2 more
> {noformat}
> I'll attach a thread dump in which you will see that Thread-10 has locked the handler lock in AbstractConnectionContext (part of the 'proton plug') and is itself blocked on the lock in ServerConsumerImpl, which is held by Thread-21. Thread-21 is waiting for a write lock on the deliveryLock in ServerConsumerImpl. However, Thread-20 already holds a read lock on it, and is blocked (while holding that read lock) on the same handler lock within the proton plug (object 0x00000000f3d2bd90) that Thread-10 has locked.
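The cycle described above (Thread-10 holds the handler lock and wants the ServerConsumerImpl lock; Thread-21 holds that lock and wants the deliveryLock write lock; Thread-20 holds a deliveryLock read lock and wants the handler lock) is a classic lock-ordering deadlock. A minimal two-lock illustration, not Artemis code, showing how the JVM itself can report such a cycle via `ThreadMXBean`, the same machinery a thread dump uses:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

// Hypothetical demo: two daemon threads each take one monitor, then block on
// the other, and ThreadMXBean reports the resulting cycle.
class DeadlockDemo {
   static long[] provokeAndDetect() {
      final Object handlerLock = new Object();   // stands in for the proton-plug handler lock
      final Object consumerLock = new Object();  // stands in for the ServerConsumerImpl lock
      final CountDownLatch bothHeld = new CountDownLatch(2);

      Runnable first = () -> {
         synchronized (handlerLock) {
            bothHeld.countDown();
            await(bothHeld);                     // wait until both locks are held
            synchronized (consumerLock) { }      // blocks forever: held by the other thread
         }
      };
      Runnable second = () -> {
         synchronized (consumerLock) {
            bothHeld.countDown();
            await(bothHeld);
            synchronized (handlerLock) { }       // blocks forever: opposite order
         }
      };
      Thread t1 = new Thread(first, "demo-thread-10");
      Thread t2 = new Thread(second, "demo-thread-20");
      t1.setDaemon(true); // daemons: the stuck threads must not block JVM exit
      t2.setDaemon(true);
      t1.start();
      t2.start();

      // poll until the JVM reports the monitor cycle
      ThreadMXBean mx = ManagementFactory.getThreadMXBean();
      long[] deadlocked = null;
      while (deadlocked == null) {
         try {
            Thread.sleep(50);
         } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return new long[0];
         }
         deadlocked = mx.findDeadlockedThreads();
      }
      return deadlocked;
   }

   private static void await(CountDownLatch latch) {
      try {
         latch.await();
      } catch (InterruptedException ignored) {
      }
   }
}
```

The latch guarantees both monitors are held before either thread tries the second acquisition, so the deadlock, and its detection, are deterministic.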



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
