[jira] [Commented] (AMQ-5379) AMQP - allow setting prefetch size
[ https://issues.apache.org/jira/browse/AMQ-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234150#comment-14234150 ]

Dejan Bosanac commented on AMQ-5379:
------------------------------------

I pushed the change that sets a zero prefetch when nothing else is specified, and then uses the first flow packet to adjust it. After the upgrade to Qpid 0.30 [AMQ-5405], all tests look good. The commit also contains Robbie's patch.

AMQP - allow setting prefetch size
----------------------------------
Key: AMQ-5379
URL: https://issues.apache.org/jira/browse/AMQ-5379
Project: ActiveMQ
Issue Type: Bug
Components: AMQP
Affects Versions: 5.10.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
Fix For: 5.11.0
Attachments: 0001-AMQ-5379-inspect-the-actual-total-credit-rather-than.patch

Currently the prefetch size is hardcoded to the value of 100.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
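The behavior described in the comment — start at zero prefetch and let the client's first AMQP flow frame set the effective value — can be sketched as below. This is an illustration of the idea only; the class and method names are hypothetical, not ActiveMQ's actual code.

```java
// Sketch: prefetch starts at zero and is adjusted once, from the link credit
// carried by the first flow frame. Hypothetical names, for illustration only.
public class PrefetchFromFlow {
    private int prefetch = 0;          // zero until the client grants credit
    private boolean adjusted = false;  // only the first flow frame adjusts it

    /** Called when a flow frame arrives with the link credit granted by the client. */
    public void onFlowFrame(int linkCredit) {
        if (!adjusted && linkCredit > 0) {
            prefetch = linkCredit;     // adopt the client's credit as the prefetch size
            adjusted = true;
        }
    }

    public int getPrefetch() {
        return prefetch;
    }
}
```

Later flow frames replenish credit but, in this sketch, no longer change the prefetch baseline.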
[jira] [Assigned] (AMQ-5438) Archiving mKahaDB log files cause broker restart
[ https://issues.apache.org/jira/browse/AMQ-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dejan Bosanac reassigned AMQ-5438:
----------------------------------
Assignee: Dejan Bosanac

Archiving mKahaDB log files cause broker restart
------------------------------------------------
Key: AMQ-5438
URL: https://issues.apache.org/jira/browse/AMQ-5438
Project: ActiveMQ
Issue Type: Bug
Components: Message Store
Affects Versions: 5.10.0
Environment: OSX
Reporter: Jakub Korab
Assignee: Dejan Bosanac

When using the following config, the broker service seems to restart every journal archive interval (30s):

{code:xml}
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <filteredKahaDB queue="DLQ.>">
        <persistenceAdapter>
          <!-- keep small journals, and delete them when consumed -->
          <kahaDB journalMaxFileLength="4mb" archiveDataLogs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <filteredKahaDB queue=">">
        <persistenceAdapter>
          <!-- keep larger journals, and archive them when consumed -->
          <kahaDB journalMaxFileLength="32mb" archiveDataLogs="true"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- match all other destinations (topics); needed for completeness -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB enableJournalDiskSyncs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
{code}

The log files show the following exception:

{noformat}
java.io.IOException: Failed to move /opt/amq-1/data/kahadb/queue#3a#2f#2f#3e/db-2.log to data-archive
	at org.apache.activemq.util.IOHelper.moveFile(IOHelper.java:189)[activemq-broker-5.10.0.jar:5.10.0]
	at org.apache.activemq.store.kahadb.disk.journal.DataFile.move(DataFile.java:84)[activemq-kahadb-store-5.10.0.jar:5.10.0]
	at org.apache.activemq.store.kahadb.disk.journal.Journal.forceRemoveDataFile(Journal.java:437)[activemq-kahadb-store-5.10.0.jar:5.10.0]
{noformat}
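The IOException above suggests the move fails because the `data-archive` target directory does not exist yet, and the associated pull request subject ("Added code to create KahaDB...") points the same way. A defensive sketch of that kind of fix, using only standard `java.nio.file` calls — this is an illustration, not the actual ActiveMQ patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: create the archive directory before moving a journal file into it,
// so the move cannot fail on a missing target directory.
public class ArchiveMove {
    public static Path moveToArchive(Path dataFile, Path archiveDir) throws IOException {
        Files.createDirectories(archiveDir);  // no-op if the directory already exists
        Path target = archiveDir.resolve(dataFile.getFileName());
        return Files.move(dataFile, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```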
Jenkins build became unstable: ActiveMQ » ActiveMQ :: AMQP #1560
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-amqp/1560/changes
Jenkins build is back to stable : ActiveMQ » ActiveMQ :: STOMP Protocol #1559
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-stomp/1559/
Jenkins build is back to stable : ActiveMQ » ActiveMQ :: RA #1559
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-ra/1559/
Jenkins build is back to stable : ActiveMQ » ActiveMQ :: LevelDB Store #1559
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-leveldb-store/1559/
[GitHub] activemq pull request: Fixed AMQ-5438. Added code to create KahaDB...
Github user asfgit closed the pull request at: https://github.com/apache/activemq/pull/50 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (AMQ-5438) Archiving mKahaDB log files cause broker restart
[ https://issues.apache.org/jira/browse/AMQ-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234185#comment-14234185 ]

ASF GitHub Bot commented on AMQ-5438:
-------------------------------------

Github user asfgit closed the pull request at: https://github.com/apache/activemq/pull/50

Key: AMQ-5438
URL: https://issues.apache.org/jira/browse/AMQ-5438
[jira] [Resolved] (AMQ-5438) Archiving mKahaDB log files cause broker restart
[ https://issues.apache.org/jira/browse/AMQ-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dejan Bosanac resolved AMQ-5438.
--------------------------------
Resolution: Fixed
Fix Version/s: 5.11.0

Patch applied. Thanks!

Key: AMQ-5438
URL: https://issues.apache.org/jira/browse/AMQ-5438
[GitHub] activemq-6 pull request: ACTIVEMQ6-11 - fix distribution build for...
Github user asfgit closed the pull request at: https://github.com/apache/activemq-6/pull/31
[jira] [Commented] (ACTIVEMQ6-11) Fix the distribution build
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-11?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234194#comment-14234194 ]

ASF GitHub Bot commented on ACTIVEMQ6-11:
-----------------------------------------

Github user asfgit closed the pull request at: https://github.com/apache/activemq-6/pull/31

Fix the distribution build
--------------------------
Key: ACTIVEMQ6-11
URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-11
Project: Apache ActiveMQ 6
Issue Type: Improvement
Reporter: Martyn Taylor
Assignee: Andy Taylor
Fix For: 6.0.0
[jira] [Created] (AMQ-5475) AMQP transport does not send correct response to unsupported clients
Timothy Bish created AMQ-5475:
------------------------------
Summary: AMQP transport does not send correct response to unsupported clients
Key: AMQ-5475
URL: https://issues.apache.org/jira/browse/AMQ-5475
Project: ActiveMQ
Issue Type: Bug
Components: AMQP
Affects Versions: 5.10.0, 5.9.1
Reporter: Timothy Bish
Assignee: Timothy Bish
Fix For: 5.11.0

Section 2.2 of the AMQP 1.0 spec covers protocol negotiation and states that when a client connects and sends an AMQP header with an unsupported version, the broker should send back an AMQP header with the version it supports before closing the socket. This also applies if the header cannot be parsed at all, for example when an HTTP client connects to the AMQP transport port. We currently just close the socket, which gives the connecting client no insight into why the connection failed and no chance to retry with a supported protocol.
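The negotiation the spec asks for can be sketched like this: on an unrecognized or unparseable 8-byte header, reply with the header the broker does support before closing. The AMQP 1.0 header is the ASCII bytes "AMQP" followed by protocol-id and version bytes (0, 1, 0, 0 for plain AMQP 1.0; the SASL and TLS variants differ in the protocol-id byte). Class and method names here are illustrative, not ActiveMQ code.

```java
import java.util.Arrays;

// Sketch of AMQP 1.0 section 2.2 negotiation: on a header mismatch, answer
// with the supported header instead of closing the socket silently.
public class AmqpHeaderCheck {
    // "AMQP", protocol-id 0, version 1.0.0
    static final byte[] SUPPORTED = {'A', 'M', 'Q', 'P', 0, 1, 0, 0};

    /** Returns null if the client header is acceptable, otherwise the header to send back before closing. */
    public static byte[] replyFor(byte[] clientHeader) {
        if (Arrays.equals(clientHeader, SUPPORTED)) {
            return null;      // proceed with the connection
        }
        return SUPPORTED;     // tell the client what we support, then close
    }
}
```

An HTTP request line ("GET / ...") arriving on the AMQP port would fail the comparison and get the supported header back, which is exactly the case the issue describes.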
[jira] [Commented] (ACTIVEMQ6-36) Disallow use of SSLv3 to protect against POODLE
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-36?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234311#comment-14234311 ]

Torsten Mielke commented on ACTIVEMQ6-36:
-----------------------------------------

Using this configuration should disable SSLv3 in the broker's transport connector:

{code}
<transportConnector name="ssl" uri="ssl://localhost:61617?transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
{code}

Additional configuration is needed for the web console.

Disallow use of SSLv3 to protect against POODLE
-----------------------------------------------
Key: ACTIVEMQ6-36
URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-36
Project: Apache ActiveMQ 6
Issue Type: Bug
Reporter: Justin Bertram
Assignee: Justin Bertram
Fix For: 6.0.0
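The `transport.enabledProtocols` URI option covers the broker's connector; for code paths that build their own sockets (the web console's connector mentioned above, for instance), the same idea is to filter SSLv3 out of the enabled protocol list. A minimal sketch with a hypothetical helper class:

```java
import java.util.Arrays;

// Sketch: drop SSLv3 from an enabled-protocols list before applying it to a socket.
public class ProtocolFilter {
    public static String[] withoutSslv3(String[] enabled) {
        return Arrays.stream(enabled)
                .filter(p -> !p.equalsIgnoreCase("SSLv3"))
                .toArray(String[]::new);
    }
}
```

With standard JSSE, the result would be applied via `sslSocket.setEnabledProtocols(ProtocolFilter.withoutSslv3(sslSocket.getEnabledProtocols()))`.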
[jira] [Commented] (AMQ-5342) Deadlock in network of brokers simple architecture
[ https://issues.apache.org/jira/browse/AMQ-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234639#comment-14234639 ]

Tim Bain commented on AMQ-5342:
-------------------------------

Is there a reason that you've got two duplex connections between the brokers, one initiated by each broker? Does the problem still occur if you only use a single duplex connection initiated by one broker, or if you use two non-duplex connections?

Deadlock in network of brokers simple architecture
--------------------------------------------------
Key: AMQ-5342
URL: https://issues.apache.org/jira/browse/AMQ-5342
Project: ActiveMQ
Issue Type: Bug
Components: Transport
Affects Versions: 5.9.1, 5.10.0
Environment: Ubuntu 12.04.5 LTS, java version 1.7.0_60, Java(TM) SE Runtime Environment (build 1.7.0_60-b19), Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)
Reporter: Claudio Santana
Priority: Critical
Attachments: activemq-b1.xml, activemq-b2.xml, deadlock.txt

With the following topology:

producer (1+ threads) -> broker1 <-> broker2 -> consumer (2+ threads)

The producer has to be started first and allowed to enqueue a few messages; then the consumer threads are started. The consumer shows it has received only one or two messages while the producer keeps on enqueueing messages normally. The scenario doesn't happen when the consumer threads are started first and then the producers, but if the consumers are disconnected and then reconnected the issue is visible again. I'm using the Swiss Army OpenWire example clients.

Consumer starting command line:
ant consumer -Durl=tcp://localhost:61627 -Dtopic=false -Dmax=200 -DparallelThreads=2 -DsleepTime=100

Producer starting command line:
ant producer -Durl=tcp://localhost:61626 -Dtopic=false -Dmax=400 -DparallelThreads=1 -DsleepTime=100 -DmessageSize=1000

Taking a thread dump I can see a deadlock; see the attached file for details. I also attached the broker configurations.
[jira] [Commented] (AMQ-5338) Run into Memory Leak problem after upgrade from 5.5.1 to 5.10
[ https://issues.apache.org/jira/browse/AMQ-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234645#comment-14234645 ]

Tim Bain commented on AMQ-5338:
-------------------------------

What GC strategy are you using? And which JVM implementation (HotSpot, OpenJDK, IBM, etc.)? And are you sure that your full GCs were actually *full* GCs rather than just GCs that collected from Old Gen? Using G1GC in HotSpot 7u21, I've seen behavior like what you described (an upward-sloping sawtooth pattern for Old Gen usage), but once a real full GC happened, Old Gen dropped to nearly nothing and then started the pattern again, which makes me think that what I saw was a poor implementation of the G1GC algorithm rather than anything wrong with ActiveMQ. Did you actually let things run until you hit an OutOfMemoryError? If not, you're just speculating that a memory leak actually existed, because the GC strategy is under no obligation to collect everything that can be GCed at any point in time.

Run into Memory Leak problem after upgrade from 5.5.1 to 5.10
-------------------------------------------------------------
Key: AMQ-5338
URL: https://issues.apache.org/jira/browse/AMQ-5338
Project: ActiveMQ
Issue Type: Bug
Affects Versions: 5.9.0, 5.10.0
Environment: Windows 2012 R2, JDK 1.7.40, CPU i5, Memory 8G
Reporter: Edwin Wang

Our production environment has used ActiveMQ 5.5.1 for a long time and it works well. Recently we decided to verify ActiveMQ 5.10 in preparation for an upgrade. However, after we upgraded the test environment to 5.10, the broker easily runs out of memory every 7-10 days. To compare the two, I used the same environment and the same configuration for both 5.5.1 and 5.10.0. In the end, 5.10.0 ran out of memory while 5.5.1 worked well. It looks like a bug in 5.10.
Below is the main part of the configuration:

{code:xml}
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">" advisoryForConsumed="false" advisoryForDelivery="false" gcInactiveDestinations="true" inactiveTimoutBeforeGC="60" producerFlowControl="true" memoryLimit="3mb">
        <pendingSubscriberPolicy>
          <vmCursor />
        </pendingSubscriberPolicy>
      </policyEntry>
      <policyEntry queue=">" advisoryForConsumed="false" advisoryForDelivery="false" gcInactiveDestinations="true" inactiveTimoutBeforeGC="60" producerFlowControl="true" memoryLimit="3mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<managementContext>
  <managementContext createConnector="true"/>
</managementContext>

<plugins>
  <simpleAuthenticationPlugin>
    <users>
      <authenticationUser username="msgmanager" password="Br0c@d3s0ft" groups="admins"/>
    </users>
  </simpleAuthenticationPlugin>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <authorizationEntry queue=">" read="admins" write="admins" admin="admins"/>
          <authorizationEntry topic=">" read="admins" write="admins" admin="admins"/>
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
{code}
Jenkins build became unstable: ActiveMQ » ActiveMQ :: LevelDB Store #1561
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-leveldb-store/1561/
Jenkins build became unstable: ActiveMQ » ActiveMQ :: RA #1561
See https://builds.apache.org/job/ActiveMQ/org.apache.activemq$activemq-ra/1561/
[jira] [Updated] (AMQ-5473) Race condition caused by Linkstealing might make durable subs inactive
[ https://issues.apache.org/jira/browse/AMQ-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

AR updated AMQ-5473:
--------------------
Attachment: (was: linksteal_durablesub.patch)

Race condition caused by Linkstealing might make durable subs inactive
----------------------------------------------------------------------
Key: AMQ-5473
URL: https://issues.apache.org/jira/browse/AMQ-5473
Project: ActiveMQ
Issue Type: Bug
Components: Broker
Affects Versions: 5.11.0
Reporter: AR

Link stealing creates a new connection and disconnects the old connection. These operations are done concurrently. The new connection triggers addConsumer() for active subscriptions, and the old connection's stop causes removeConsumer() for the same subscriptions.

Problems:
* addConsumer() would throw an exception that the sub is already active (if the remove did not happen before)
* even if we get past the exception, it will not set the right (new) connection context and consumer info on the subs
* removeConsumer() may remove the subscription even if it had a different connection context (created by link stealing)

Patch attached.
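One way to picture the race described above is that nothing serializes the old connection's removeConsumer() against the new connection's addConsumer() for the same client id. A minimal sketch of the serialization idea — purely illustrative, with hypothetical names; the attached patch may take a different approach:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: serialize a link steal per client id so remove-old and add-new
// cannot interleave. Hypothetical names, not the actual ActiveMQ patch.
public class LinkStealGuard {
    private final Map<String, Object> locks = new ConcurrentHashMap<>();
    private final Map<String, String> activeConnectionByClientId = new ConcurrentHashMap<>();

    public void steal(String clientId, String newConnectionId, Runnable removeOld, Runnable addNew) {
        Object lock = locks.computeIfAbsent(clientId, k -> new Object());
        synchronized (lock) {
            removeOld.run();  // detach the old connection's subscriptions first
            addNew.run();     // then attach them to the new connection
            activeConnectionByClientId.put(clientId, newConnectionId);
        }
    }

    public String activeConnection(String clientId) {
        return activeConnectionByClientId.get(clientId);
    }
}
```

Under this ordering, addConsumer() always runs after the matching removeConsumer(), so the "sub already active" exception and the stale-context removal cannot occur.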
[jira] [Updated] (AMQ-5473) Race condition caused by Linkstealing might make durable subs inactive
[ https://issues.apache.org/jira/browse/AMQ-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

AR updated AMQ-5473:
--------------------
Attachment: linksteal_durablesub.patch

Patch updated.

Key: AMQ-5473
URL: https://issues.apache.org/jira/browse/AMQ-5473
[jira] [Closed] (ACTIVEMQ6-42) Fix FORCED_DELIVERY_MESSAGE dependent on hornetq naming
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

clebert suconic closed ACTIVEMQ6-42.
------------------------------------
Resolution: Won't Fix
Fix Version/s: (was: 6.0.0)

Fix FORCED_DELIVERY_MESSAGE dependent on hornetq naming
-------------------------------------------------------
Key: ACTIVEMQ6-42
URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-42
Project: Apache ActiveMQ 6
Issue Type: Sub-task
Reporter: clebert suconic
Assignee: Howard Gao
Priority: Critical

When dealing with receiveNoWait messages, core clients send a special message with a property called _hornetq.forced.delivery.seq. We intend to keep the wire protocol compatible with old HornetQ clients, so we may have to do some smart handling in ACTIVEMQ6-8 to support the proper name here (either _hornetq or _activemq).
[jira] [Reopened] (ACTIVEMQ6-42) Fix FORCED_DELIVERY_MESSAGE dependent on hornetq naming
[ https://issues.apache.org/jira/browse/ACTIVEMQ6-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

clebert suconic reopened ACTIVEMQ6-42:
--------------------------------------

Key: ACTIVEMQ6-42
URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-42