Non-persistent messages can be sent to a durable subscription, but the
durable subscriber will only get the messages if it is connected; no
backlog will be retained, because the messages are not stored.
In this regard it behaves like a regular topic subscription.
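For reference, a broker can also be run entirely without a persistence store by setting persistent="false" in its configuration. This is a minimal sketch of a hypothetical activemq.xml fragment, not something taken from this thread:

```xml
<!-- A "diskless" broker: no KahaDB/AMQ store is created, so even
     messages marked PERSISTENT are held in memory only, and durable
     subscription backlogs do not survive a broker restart. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="diskless"
        persistent="false">
</broker>
```

On the client side, the standard JMS call producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT) marks individual messages as non-persistent, regardless of whether the consuming subscription is durable.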

On 14 December 2010 09:13, Aleksandar Ivanisevic
<aleksan...@ivanisevic.de> wrote:
>
> FWIW, I've nailed down my problem, it was a heavy I/O process running
> on the host node, sucking out all the I/O from the broker that was
> running in a (somewhat misconfigured) VM.
>
> That being said, is it possible to have a "diskless" broker and still
> have persistence?
>
> All my messages are persistent, but I'm not sure if they really need
> to be. I've made them persistent only because they are all going to
> durable topics, but the docs are really vague about this, and I can't
> remember whether I tested if non-persistent messages can be sent to
> durable topics.
>
> Reynald Borer <reynald.bo...@gmail.com>
> writes:
>
>>  Hi guys,
>>
>>
>> Have you tried to enable GC logs to see if maybe a full GC is
>> happening while everything is slowed down?
>>
>>
>> In the past I encountered the same issue, where for an unknown reason
>> the GC activity was suddenly high, causing a full GC of 0.2 seconds
>> every second. I spotted this issue because the GC logs were active
>> (and of course also because everything seemed to be slowed down).
>>
>>
>> This happened with the AMQ persistence adapter, not KahaDB (using
>> version 5.3.0). The problem is now solved simply because the number
>> of connections has been lowered, which is not an ideal solution. So
>> I'm really interested if you find something useful.
>>
>>
>> Regards, Reynald
>>
>>
>> On Monday, December 13, 2010 at 15:58 ,
>> robert.sl...@misys.com wrote:
>>
>>>  I have tried each of the different cursor types with 5.4.1, but
>>> since downloading the 5.4.2 version, I have not changed the cursor
>>> used, so this is the default configuration, where no cursor is
>>> defined for queues - this should result in the default store cursor
>>> being used.
>>>
>>> I have just tried with enableJournalDiskSyncs="false", but this
>>> results in similarly slow processing after a batch of messages has
>>> been processed. Although with this approach the consumer thread
>>> seemed to be more hindered than the producer, as messages were being
>>> produced onto the test queue around 3-4 times faster than they were
>>> being consumed. Once again I get the same EOFException thrown, and
>>> message production/consumption slows periodically, for longer
>>> periods the more messages I process.
>>>
>>> --
>>> View this message in context:
>>> http://activemq.2283324.n4.nabble.com/Slow-throughput-after-several-hundred-messages-tp3082431p3085521.html
>>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>>
>>>
>>>
>>>
>>
>>
>
>



-- 
http://blog.garytully.com
http://fusesource.com
