Thanks, Robbie and Gordon, for the suggestions. When I debugged further, I 
observed that the consuming application was blocked: even though messages were 
there in the queue, it was unable to receive them. When I restarted the 
consuming application it would run for some time and then go into the blocked 
state again, which caused the queue to grow until it was full, which caused 
this issue. I will probably test with Broker-J or ActiveMQ.
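
The blocked-consumer symptom can be reproduced in miniature with a plain 
bounded queue (an editor's toy sketch, not Qpid client code): once the 
consumer stops draining, sends start failing at the depth limit, much as the 
broker detaches the sender.

```python
# Toy model of the behaviour described above (not Qpid code): a producer
# feeding a bounded queue while the consumer stalls. Once the consumer stops
# draining, the queue reaches its depth limit and further sends fail, which
# is analogous to the broker's "Maximum depth exceeded" link detach.
import queue

def produce(q, n):
    """Try to enqueue n messages; return how many were accepted."""
    accepted = 0
    for i in range(n):
        try:
            q.put_nowait(f"msg-{i}")
            accepted += 1
        except queue.Full:
            break  # depth limit reached: the broker would detach the sender here
    return accepted

q = queue.Queue(maxsize=5)
sent = produce(q, 10)      # consumer never runs, so the queue fills
print(sent)                # 5: only maxsize messages fit

# Draining (i.e. restarting the consumer) makes room again
q.get_nowait()
print(produce(q, 10))      # 1: one slot freed, one message accepted
```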

-----Original Message-----
From: Robbie Gemmell [mailto:robbie.gemm...@gmail.com] 
Sent: Monday, May 24, 2021 4:13 PM
To: users <users@qpid.apache.org>
Subject: Re: Message Persistence in Qpid Broker C++ (1.39)

The SendSettled option would seem likely to suggest pre-settling messages 
before sending, i.e. giving an unreliable send without acknowledgement. That 
isn't typical if you want to persist the messages, which more usually implies 
a desire for reliability, so I would question your option selection there.

The error suggests your queue built up enough stored messages (a count of 
204400, totalling 104857200 bytes) to hit its configured size limit (104857600 
bytes), and so the broker detached the sending link with an error (behaviour 
that potentially changes with the other policy choices). Your application 
would need to re-establish a sender to continue in that case.
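
The limit check behind that error can be sketched abstractly (an editor's 
illustration, not the actual Queue.cpp logic; the numbers come from the error 
text quoted below):

```python
# Sketch of the depth check implied by the error (illustrative only): a
# message is refused when adding it would push the queue past its configured
# byte limit.
def would_exceed(current_size, msg_size, max_size):
    return current_size + msg_size > max_size

# Numbers from the reported error: current size 104857200 bytes against a
# 104857600-byte (100 MiB) limit leaves only 400 bytes of headroom.
print(would_exceed(104857200, 400, 104857600))   # False: exactly at the limit
print(would_exceed(104857200, 401, 104857600))   # True: next message refused
```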

I don't really know the C++ broker's persistence, but from a search it seems 
the 'flow-to-disk' policy was replaced with a distinct paging feature. Gordon 
wrote this in
https://lists.apache.org/thread.html/4199af433994c51357d8f9af3c887aa5451cc931fdfae310bd95b238@<users.qpid.apache.org>
:

"It is no longer implemented in that form and with that name. There is a 
'paging' mechanism that is considered a replacement.

Paging is enabled for a queue by setting "qpid.paging" to true. It can be 
tuned via "qpid.max_pages_loaded", which determines how many pages can be kept 
in memory, and "qpid.page_factor", which controls the page size (as a multiple 
of the platform default page size).

Note that this mechanism is entirely orthogonal to persistence."
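
Assuming qpid-config forwards arbitrary queue arguments via an --argument 
option (an assumption; flag availability depends on the qpid-config version), 
the paging settings above might be applied like this:

```shell
# Hypothetical example: create a durable queue with paging enabled and tuned.
# The qpid.* argument names come from Gordon's note; --argument is assumed
# to be supported by this qpid-config version.
qpid-config add queue odimevent --durable \
    --argument qpid.paging=true \
    --argument qpid.max_pages_loaded=4 \
    --argument qpid.page_factor=1
```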


On Mon, 24 May 2021 at 08:05, Namitha, Nancy <nami...@hpe.com> wrote:
>
> Hi Gordon/Rob,
>         I tried the asynchronous approach by setting LinkSourceDurability 
> to MixedMode (2) and LinkSenderSettle to MixedMode (2), and also setting the 
> message property SendSettled to true, with a sender publishing messages and 
> a receiver consuming them. After some time the sending application fails 
> with the error "Maximum depth exceeded".
> Also, the consuming application hangs after the sender errors out. I 
> have to restart the receiver process to consume the rest of the messages. Is 
> there any other configuration that I am missing?
>
> Also, when creating the queue, I cannot set the limit policy to 
> flow-to-disk.
>
> Sending message:link detached, reason: *Error{Condition: 
> amqp:precondition-failed, Description: resource-limit-exceeded: 
> Maximum depth exceeded on odimevent: current=[count: 204400, size: 
> 104857200], max=[size: 104857600] 
> (/build/qpid-cpp-VLEObV/qpid-cpp-1.39.0/src/qpid/broker/Queue.cpp:1658
> ), Info: map[]} exit status 1
>
>
> Also, if I try to configure the limit policy as flow-to-disk, I get an 
> "invalid choice" error. Do I need to configure anything else to enable it?
>
> qpid-config: error: option --limit-policy: invalid choice: 
> 'flow-to-disk' (choose from 'none', 'reject', 'ring', 'ring-strict')
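
[Editor's note: since 'flow-to-disk' is no longer accepted, the queue has to 
use one of the listed policies instead. A sketch, with the queue name and size 
taken from the error above and 'reject' chosen purely as an example:]

```shell
# flow-to-disk is gone, so pick one of the supported policies instead.
# 'ring' discards the oldest messages at the limit; 'reject' (shown here)
# refuses new publishes, matching the reported depth-exceeded behaviour.
qpid-config add queue odimevent --durable \
    --max-queue-size 104857600 --limit-policy reject
```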
>
> Thanks & Regards,
> Nancy
>
> -----Original Message-----
> From: Rob Godfrey [mailto:rob.j.godf...@gmail.com]
> Sent: Wednesday, May 19, 2021 8:55 PM
> To: users@qpid.apache.org
> Subject: Re: Message Persistence in Qpid Broker C++ (1.39)
>
> On Wed, 19 May 2021 at 17:04, Gordon Sim <g...@redhat.com> wrote:
>
> > On Wed, May 19, 2021 at 3:34 PM Namitha, Nancy <nami...@hpe.com> wrote:
> > > Is Broker - J optimized for the below mentioned scenario.
> >
> > I believe the java broker will indeed perform better for synchronous 
> > publish. Best thing is to run a quick test though.
> >
>
> Broker-J does not rely on a flush timer, instead simply combining all DB 
> interactions that are running concurrently into a single write... So, 
> theoretically, for this particular case you shouldn't see the same sort of 
> latency... but as Gordon says you should really test with something that best 
> represents your actual expected workload.  In general if you care about 
> performance then sending individual messages synchronously is not a good 
> choice.
>
> -- Rob
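
[Editor's note: Rob's description of combining concurrent DB interactions into 
a single write can be illustrated with a toy sketch (not Broker-J code):]

```python
# Toy illustration of the "group commit" idea described above: concurrent
# writers hand their records to a shared log, and a single flush combines
# everything pending into one write. N concurrent writes then cost one disk
# operation instead of N.
import threading

class GroupCommitLog:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = []
        self.flushes = 0   # how many combined writes hit "disk"
        self.records = 0   # how many records were written in total

    def write(self, record):
        with self._lock:
            self._pending.append(record)

    def flush(self):
        """Combine all currently pending records into a single write."""
        with self._lock:
            batch, self._pending = self._pending, []
        if batch:
            self.flushes += 1
            self.records += len(batch)

log = GroupCommitLog()
threads = [threading.Thread(target=log.write, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
log.flush()
print(log.flushes, log.records)   # 1 8: eight writes, one combined flush
```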
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For additional 
commands, e-mail: users-h...@qpid.apache.org
