I don’t, I was forwarding on something raised in the users mailing list; it just 
sounded important. Probably worth replying to them on the users mailing list. 

Sent from my iPhone

> On 7 Mar 2018, at 19:36, Clebert Suconic <clebert.suco...@gmail.com> wrote:
> 
> If you need really large messages, can you use the core protocol clients?
> 
> On Wed, Mar 7, 2018 at 8:30 AM, Clebert Suconic
> <clebert.suco...@gmail.com> wrote:
>> I don’t think so. We don’t stream large messages in AMQP yet; we just
>> convert them as a single chunk. It should work as long as you have enough memory.
>> 
>> Netty uses native buffers, so there is a chance this could crash with an OOM.
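
A generic sketch (not Artemis internals) of the difference being discussed: copying a payload in fixed-size chunks keeps peak heap usage at one chunk, whereas converting the whole message as a single buffer needs memory for the entire payload at once. The class and sizes below are illustrative only.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedCopy {
    // Copy a stream in fixed-size chunks; only one chunk is resident
    // at a time, unlike reading the whole payload into a single buffer.
    static long copyInChunks(InputStream in, int chunkSize) throws IOException {
        byte[] chunk = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = in.read(chunk)) != -1) {
            total += n; // a real consumer would forward each chunk here
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1 MB stand-in for a large message payload
        byte[] payload = new byte[1024 * 1024];
        long copied = copyInChunks(new ByteArrayInputStream(payload), 8192);
        System.out.println(copied);
    }
}
```

With single-chunk conversion, a 100 MB AMQP message needs at least 100 MB of buffer space (native, in Netty's case), which is why delivery can fail even when the send appeared to succeed.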
>> 
>> 
>> The other issue I saw on the users list seems like a blocker, and I will be
>> looking into that.
>> 
>> On Wed, Mar 7, 2018 at 12:01 AM Michael André Pearce
>> <michael.andre.pea...@me.com> wrote:
>>> 
>>> @Clebert
>>> 
>>> Could this be a blocker for release?
>>> Just sending it to the dev mailing list in case it got missed on the users list.
>>> 
>>> Begin forwarded message:
>>> 
>>> From: andi welchlin <andi.welch...@gmail.com>
>>> Date: 6 March 2018 at 09:08:39 GMT
>>> To: us...@activemq.apache.org
>>> Subject: ActiveMQ Artemis crashes when large AMQP messages are delivered
>>> Reply-To: us...@activemq.apache.org
>>> 
>>> Hello,
>>> 
>>> I tested Artemis as a standalone broker (the current 2.5.0 snapshot).
>>> 
>>> One program sent a 100 MB AMQP message to a queue on the broker. This
>>> worked.
>>> 
>>> But while the message was being delivered to a subscriber, Artemis crashed.
>>> 
>>> The queue was defined like this:
>>> 
>>>      <queues>
>>>        <queue name="awe.test.queue">
>>>          <address>awe.test.queue</address>
>>>          <durable>true</durable>
>>>        </queue>
>>>      </queues>
>>> 
>>> And I set the global max size to 1000 MB:
>>> 
>>>      <global-max-size>1000Mb</global-max-size>
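
Besides the global limit, broker.xml also supports per-address limits with paging, which can stop a single address from exhausting memory. A hedged sketch (the match value and sizes below are illustrative, not a verified fix for this crash):

```xml
<address-settings>
   <address-setting match="awe.test.queue">
      <max-size-bytes>104857600</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```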
>>> 
>>> 
>>> 
>>> When I restarted the broker and connected the subscriber, it crashed
>>> again.
>>> 
>>> Since the log output is large, I attached it as a file to this mail. The end of
>>> the output looks like this:
>>> 
>>>        at java.lang.Object.wait(Native Method)
>>>        -  waiting on java.lang.ref.ReferenceQueue$Lock@773bc0c1
>>>        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
>>>        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
>>>        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
>>> 
>>> 
>>> "Reference Handler" Id=2 WAITING on java.lang.ref.Reference$Lock@3e627566
>>>        at java.lang.Object.wait(Native Method)
>>>        -  waiting on java.lang.ref.Reference$Lock@3e627566
>>>        at java.lang.Object.wait(Object.java:502)
>>>        at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
>>>        at
>>> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
>>> 
>>> 
>>> 
>>> ===============================================================================
>>> End Thread dump
>>> 
>>> *******************************************************************************
>>> 
>>> 
>>> I tested the same setup with a Qpid C++ broker. It had no problems, even
>>> with much larger messages.
>>> 
>>> Kind Regards,
>>> Andreas
>>> 
>>> 
>>> 
>>> 
>> --
>> Clebert Suconic
> 
> 
> 
> -- 
> Clebert Suconic
