Hmm, in fact it doesn't seem to always be the case. I am running ActiveMQ as a standalone broker plus a producer client; only the producer is connected (there is no consumer), and it sends GBs of data to ActiveMQ:
val msg = "Hello world!" * 100 val start = System.currentTimeMillis (0 to 30000000).foreach { i => if (i % 1000 == 0) { println(i) publisher.commit() } publisher.send(publisher.createTextMessage(msg + i)) } kahadb is already 10GB and activemq server runs with Xmx1g. The server in fact uses about 200MB on average (after I do a manual GC) . The producer is never blocked! But my original post is for a vm: protocol, so is it an embeded activemq issue? This is the configuration of my current test where I get 10GB of kahadb stored. I didn't configure PFC but all work fine: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd"> <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="/mnt/data/activemq-data/"> <managementContext> <managementContext createConnector="true"/> </managementContext> <persistenceAdapter> <kahaDB directory="/mnt/data/activemq-data/kahadb"/> </persistenceAdapter> <transportConnectors> <transportConnector name="openwire" uri="tcp://0.0.0.0:60001?maximumConnections=1000&wireformat.maxFrameSize=104857600"/> </transportConnectors> </broker> </beans> -- View this message in context: http://activemq.2283324.n4.nabble.com/activemq-deadlocks-when-publisher-tries-to-commit-tp4671888p4671949.html Sent from the ActiveMQ - User mailing list archive at Nabble.com.