RE: Qpid Dispatch Console

2020-02-12 Thread HADI Ali
Thanks Gordon! It's a really nice console!

-Original Message-
From: Gordon Sim 
Sent: Tuesday, February 11, 2020 14:42
To: users@qpid.apache.org
Subject: Re: Qpid Dispatch Console

On 11/02/2020 1:28 pm, HADI Ali wrote:
> Hello,
>
> I'm using the Dispatch router 1.9.0 and trying to launch the dispatch router 
> console as documented. I installed the required packages on a RHEL7 machine 
> but when adding an http listener, I'm getting the following error in the
> logs: SERVER (error) No HTTP support to listen on 0.0.0.0:5672.
>
> Any idea why this may be occurring?

It sounds like your router was built without http support. It needs the 
libwebsockets dev package (i.e. headers as well as lib) to be available at the 
time you run cmake.
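
For concreteness, a minimal sketch of the kind of HTTP-enabled listener being
added (a hypothetical qdrouterd.conf fragment; host and port mirror the error
above). It can only start if the router build found libwebsockets at cmake time:

listener {
    host: 0.0.0.0
    port: 5672
    http: true    # serves the console; requires a build with libwebsockets
}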







Qpid Dispatch Console

2020-02-11 Thread HADI Ali
Hello,

I'm using the Dispatch router 1.9.0 and trying to launch the dispatch router 
console as documented. I installed the required packages on a RHEL7 machine but 
when adding an http listener, I'm getting the following error in the logs:
SERVER (error) No HTTP support to listen on 0.0.0.0:5672.

Any idea why this may be occurring?

Thanks,
Ali



RE: [Proton-C] Discovery

2019-12-16 Thread HADI Ali
Hello,

- On our side, delaying the creation of the connection until one URL is
available implies reimplementing another retry strategy at startup. We
understand that the use case doesn't fit the current connection model in the
code, so we will not handle it for the moment.

- In our case, when the connection fails, we request the URLs of the available
Dispatch Routers from a discovery service. The URLs are returned as a vector
which may contain one or many URLs.
So we need to be able to update both the reconnect_url and the failover_urls.
But as stated in the documentation, updating both is undefined behavior.
For example, in case the vector contains only one new URL, we need to update
the reconnect_url and reset the failover_urls.
But in reality, when doing that, a default URL (localhost:5672) is added to the
failover_urls because the variable "current_url_" is set as the first failover
URL (https://github.com/apache/qpid-proton/blob/b9ea52d0229f69e70aa5543bd5a1521fca71796e/cpp/src/connection_options.cpp#L126).
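
A minimal sketch of the update we are attempting, assuming the
connection_options::reconnect_url()/failover_urls() setters from the file
linked above and the reconnect_update() call from PROTON-2040; the discovery
call is hypothetical:

#include <proton/connection.hpp>
#include <proton/connection_options.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/transport.hpp>
#include <string>
#include <vector>

class discovering_handler : public proton::messaging_handler {
    // Hypothetical call to our discovery service, not a Proton API.
    std::vector<std::string> discover_routers();

    void on_transport_error(proton::transport& t) override {
        std::vector<std::string> urls = discover_routers();
        if (urls.empty()) return;                         // nothing to update
        proton::connection_options co;
        co.reconnect_url(urls.front());                   // new primary URL
        co.failover_urls({urls.begin() + 1, urls.end()}); // may be empty
        // Setting both reconnect_url and failover_urls is the combination
        // the documentation flags as undefined behavior.
        t.connection().reconnect_update(co);
    }
};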

How do you think we can handle our case?

Thanks,
Ali

-Original Message-
From: Andrew Stitcher 
Sent: Friday, December 13, 2019 17:12
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery

On Fri, 2019-12-13 at 14:20 +0000, HADI Ali wrote:
> Hello,
>
> Sorry for the late reply. Thank you for the dev, it allows us to
> discover new endpoints when trying to reconnect.
>
> However there is one use case that we covered in Java with QPID-JMS
> that we couldn’t handle in C++.
> If at the creation of the connection, no messaging server is yet
> available, we want to trigger a retry on the DiscoveryAgent until new
> endpoints are found.
>
> Do you think it is possible to trigger the retry strategy already
> implemented without explicitly passing a URL at the creation of the
> connection?

As the code currently stands I don't think this is possible. I'm not sure if it 
ever will be, as what triggers the reconnect logic is actually failing to 
connect, so if you have no url to try you can't fail! We could perhaps get 
around this somehow, but the case doesn't fit the current connection model - 
can you perhaps delay creating the connection until you know at least one of 
the urls?

>
> Another note: even if we can discover new endpoints and update the
> failover URLs, the primary URL cannot be updated even if it’s not
> available anymore.

You can change the original url by setting the reconnect_url option.


Andrew





RE: [Proton-C] Discovery

2019-12-16 Thread HADI Ali
Hello,

On our side, delaying the creation of the connection until one URL is available
implies reimplementing another retry strategy at startup. However, we understand
that the use case doesn't fit the current connection model and we will drop it
for the moment.

-Original Message-
From: Andrew Stitcher 
Sent: Friday, December 13, 2019 17:12
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery

On Fri, 2019-12-13 at 14:20 +0000, HADI Ali wrote:
> Hello,
>
> Sorry for the late reply. Thank you for the dev, it allows us to
> discover new endpoints when trying to reconnect.
>
> However there is one use case that we covered in Java with QPID-JMS
> that we couldn’t handle in C++.
> If at the creation of the connection, no messaging server is yet
> available, we want to trigger a retry on the DiscoveryAgent until new
> endpoints are found.
>
> Do you think it is possible to trigger the retry strategy already
> implemented without explicitly passing a URL at the creation of the
> connection?

As the code currently stands I don't think this is possible. I'm not sure if it 
ever will be, as what triggers the reconnect logic is actually failing to 
connect, so if you have no url to try you can't fail! We could perhaps get 
around this somehow, but the case doesn't fit the current connection model - 
can you perhaps delay creating the connection until you know at least one of 
the urls?

>
> Another note: even if we can discover new endpoints and update the
> failover URLs, the primary URL cannot be updated even if it’s not
> available anymore.

You can change the original url by setting the reconnect_url option.


Andrew





RE: [Proton-C] Discovery

2019-12-13 Thread HADI Ali
Hello,

Sorry for the late reply. Thank you for the dev, it allows us to discover new 
endpoints when trying to reconnect.

However there is one use case that we covered in Java with QPID-JMS that we 
couldn’t handle in C++.
If at the creation of the connection, no messaging server is yet available, we 
want to trigger a retry on the DiscoveryAgent until new endpoints are found.

Do you think it is possible to trigger the retry strategy already implemented 
without explicitly passing a URL at the creation of the connection?

Another note: even if we can discover new endpoints and update the failover 
URLs, the primary URL cannot be updated even if it’s not available anymore.

Regards,
Ali

-Original Message-
From: Jiri Daněk 
Sent: Thursday, September 26, 2019 10:04
To: users 
Subject: Re: [Proton-C] Discovery

On Wed, Jun 5, 2019 at 12:13 PM Rabih M  wrote:

> Hello Alan,
>
> Will your pull request "reconnect_update" be released in the next
> proton release 0.29.0?
> We are waiting for this dev to implement some features from our side.
>
> We can help if needed...
>

Hi, the PRs on https://issues.apache.org/jira/browse/PROTON-2040 were merged in
time for the 0.29 release. Does the way it is implemented there suit your needs?
--
Mit freundlichen Grüßen / Kind regards
Jiri Daněk


RE: [Qpid-proton-cpp] Performance regression found in 0.29.0

2019-11-19 Thread HADI Ali
Hello,

After analysis we discovered that the regression comes from PROTON-2075
([C++] Allow TLS to use system default trusted certificate).
In fact we noticed that the ssl_client_options and the ssl_server_options are
not default-constructed the same way, and that the second one calls
pni_init_ssl_domain, which takes some time.
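
A micro-benchmark along these lines shows the difference (a hypothetical
reconstruction; the test attached to the original mail is not preserved in
the archive):

#include <proton/ssl.hpp>
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    // In 0.29.0 this default construction ends up in pni_init_ssl_domain,
    // which accounts for the extra time we measured.
    proton::ssl_server_options opts;
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start);
    std::cout << "ssl_server_options default-constructed in "
              << ms.count() << " ms" << std::endl;
    return 0;
}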

What we would like is to avoid initializing SSL when it is disabled in the
connection_options.
Does that sound reasonable to you? Should we create a Jira issue and propose a
fix?

Thanks,
Ali & Rabih

From: Rabih M 
Sent: Wednesday, November 13, 2019 19:22
To: users@qpid.apache.org
Subject: [Qpid-proton-cpp] Performance regression found in 0.29.0

Hello,

We are upgrading the Proton version in our code from 0.27.0 to 0.29.0.
While running our unit tests, we found a considerable performance regression.

We were able to reproduce the regression in a very simple use case.
Please find the code attached.

This test takes 1 ms in versions 0.27.0 and 0.28.0 but 73 ms in 0.29.0.

Do you know what might be the cause?
We will try to investigate in parallel from our side, too.

Thanks,
Rabih & Ali


ClearQueue return value

2019-09-30 Thread HADI Ali
Hello,

I'm using the Broker-J 7.1.3 REST API and I noticed that /clearQueue returns 0
even when the queue contains messages. Is this expected behavior?

Regards,
Ali


Message tracing roadmap

2019-09-27 Thread HADI Ali
Hello,

We saw the new tracing feature in the latest QPID-JMS release; having a way to
track messages in our messaging system is very useful for us.
We tested it with Jaeger 0.35.0, and we are interested in using it and in the
roadmap for tracing across the Qpid components.

Are you planning to add tracing to the dispatch router and the broker? Having a
way to trace messages step by step inside our cluster would be very helpful.

Do you think it would be possible to add the spans of the JMS sender and 
receiver inside an existing trace in order to integrate the messaging spans 
within other traces?

Thank you,
Ali and Anouchka


RE: Uneven distribution of messages

2019-07-24 Thread HADI Ali
Hello Ted,

As already discussed, with linkCapacity = 1 on the normal listener, messages
are routed to idle consumers if any, or load balanced equally between busy
consumers. This already solves a big part of our problem.

However, this configuration limits our whole cluster, and in particular
producer features. We might need to use asynchronous sends in other use cases,
which will not be possible with a linkCapacity of one.

Having a different linkCapacity for incoming and outgoing links might solve the
problem. Do you think it is good practice to do that? And do you think it is
feasible without having different entry points (listeners) for each dispatch
router?
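
A sketch of the split configuration being asked about, assuming the usual
qdrouterd.conf attributes (hosts and ports are placeholders):

listener {
    host: 0.0.0.0
    port: 5672
    role: normal
    linkCapacity: 1      # consumer side: one unsettled delivery at a time
}
connector {
    host: broker1
    port: 5673
    role: route-container
    linkCapacity: 250    # broker side: router pre-fetch
}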

Best Regards,
Ali

-Original Message-
From: Rabih M 
Sent: Monday, July 22, 2019 17:15
To: users@qpid.apache.org
Subject: Re: Uneven distribution of messages

Hello Ted,

We cannot use the link-routes because we need to have multiple consumers 
connected to the same queue.

Having multiple queues depending on the task execution time can be a good 
solution.
We are thinking about another solution too:
to have a consumer that receives all the messages and dispatches them to the
free workers, while monitoring the load to scale the workers up and down
depending on the work received...

Thanks for your help,
Best regards,
Rabih

On Fri, Jul 19, 2019 at 6:02 PM Ted Ross  wrote:

> On Fri, Jul 19, 2019 at 7:50 AM Rabih M  wrote:
>
> > Hello,
> >
> > Yes this is one of our production use cases, where we have low
> > throughput but the processing of the messages can take from seconds
> > to minutes (financial calculations). This is why it was bothering us
> > to have an idle consumer while another is overloaded.
> >
>
> You should consider using link-routes for your consumers.  Link-routes
> trade off the load balancing for tight control of credit end-to-end.
> If a consumer is link-routed to a broker, the broker won't deliver a
> message on that link until the consumer issues a credit, and then it
> will only deliver as many messages as credits were issued.
>
>
> >
> > I understand that for the performance, we need to prefetch messages
> > from the broker to make them available in the dispatch, but what I did
> > not understand is why, inside a router, we need to assign the prefetched
> > messages to a consumer connection and not wait until the connected
> > consumer issues a credit, knowing that from a performance point of view,
> > the costly action of "prefetching the messages through IO calls" was
> > already made.
> > Is it because the complexity of the routing algorithm and the
> > communications between the dispatch routers will increase?
> >
> >
> When messages are sent into the router network, they are immediately
> routed to a destination.  The routers don't hold messages for later routing.
> Also, the synchronization of real-time credit state across a network
> for all addresses is not practical or scalable.
>
>
> > Last point, we did the following test:
> > A dispatch router has a connector with a LinkCapacity=250 connected
> > to a broker and a listener with a LinkCapacity=1 connected to a consumer.
> >
>
> The important link capacity in this scenario is the 250 as it controls
> the router pre-fetch.  The consumer's link capacity of 1 is not
> relevant to this case.
>
>
> >
> > [image: Untitled Diagram.jpg]
> >
>
> I can't see the diagram, but I think I get the idea.
>
>
> >
> >
> > 1- the router 1 prefetches 250 messages from the broker
> > 2- the consumer issues a credit
> > 3- the consumer receives a message from the router but does not acknowledge
> > 4- the consumer issues another credit
> > 5- the consumer receives a message from the router but does not acknowledge
> > again. Steps 4 and 5 can be repeated until the 250 msgs are all
> > transferred to the consumer.
> >
>
> This is consistent with there being only one consumer for the address on
> the network at the time the broker sent the 250 messages.
>
>
> >
> > Is this expected behavior? Shouldn't consumer 1 have to acknowledge before
> > it can receive another message, knowing that the link capacity of the
> > listener is 1?
> >
>
> Best practice for acknowledgement is for the consumer to acknowledge
> (settle) immediately after finishing the processing of the message (i.e.
> once that message is no longer consuming memory or compute resources on the
> host).  This causes the settlement state of deliveries to be directly
> related to consumer resources.  Again, the link capacity of 1 is not having
> any effect on the behavior of this scenario.
>
>
> >
> > Thanks for your explanations and help,
> >
>
> Am I to understand that your case is this?  You have a distributed work
> queue in which the time-to-process is highly variable.  Some messages are
> processed quickly and others take much longer.  You don't want to incur the
> longer latency on messages that can be handled quickly if there are many
> more fast messages 

RE: [Proton-C] Discovery

2019-05-03 Thread HADI Ali
Thanks a lot. This is exactly what we need for our custom discovery logic.
We are also thinking of updating the maxReconnectAttempts in the 
messaging_handler::on_connection_open in order to have the equivalent of the 
startupMaxReconnectAttempts in JMS. Do you think this will be feasible with 
your dev?

Thanks,
Ali

From: Alan Conway 
Sent: Thursday, May 2, 2019 21:29
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery



On Thu, May 2, 2019 at 7:13 AM HADI Ali  wrote:
Hello Alan,

I am using Proton 0.27.0 and I am not finding a way to update the reconnect 
options.
How can I use the connection.options().reconnect(reconnect_opts_) you proposed,
or something equivalent, in order to update the reconnect options after the
connection is created?

I apologise - the ability to update the connection options was never merged. 
I've submitted a pull request for review since I'm not working full time on 
proton at the moment. https://github.com/apache/qpid-proton/pull/181
With that change in place you will be able to do what you want; it should be in 
the next release if there are no objections.  See the attached example. The 
relevant part of the example is:

void on_transport_error(proton::transport& t) OVERRIDE {
    std::cout << "disconnected by: " << t.error() << std::endl;
    static int n = 0;
    // Use the initial failover list the first 10 times, then switch to a new one.
    if (n++ == 10) {
        std::cout << "switching failover-list" << std::endl;
        proton::connection_options co;
        proton::reconnect_options ro;
        ro.failover_urls({"badX", "badY"});
        co.reconnect(ro);
        t.connection().reconnect_update(co); // Apply new options to the connection
    }
    if (n > 20) { exit(0); } // Give up after 20 reconnects
}




Regards,
Ali

-Original Message-
From: Alan Conway 
Sent: Tuesday, April 30, 2019 21:11
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery

On Tue, Apr 30, 2019 at 8:25 AM HADI Ali  wrote:

> Hello Alan,
>
> Do you have any updates concerning the proposition to update the
> reconnect options in Proton-C?
> Is it planned and if yes do you have an idea when?
>

The changes I describe below were released in version 0.26, and are available in 
the current release 0.27. You should be able to take advantage of them now.


> Regards,
> Ali
>
> -Original Message-
> From: Alan Conway 
> Sent: Thursday, January 24, 2019 23:24
> To: users@qpid.apache.org
> Subject: Re: [Proton-C] Discovery
>
> On Thu, Jan 24, 2019 at 8:28 AM Rabih M  wrote:
>
> > Hello,
> >
> > Knowing that the on_transport_error will be called only when the
> > max_reconnect is reached, the user will have to manage one reattempt
> > at a time. It will become too flexible: the user will have to write
> > his own reconnect strategy without reusing what was already done in
> > the embedded reconnect code.
> >
>
> That used to be the case, but on_transport_error() is now supposed
> to be called every time there is a transport error, exactly to support
> this kind of use case. I can't remember if that change made it into
> 0.26 or if it's just on master now.
>
> > We would like to reuse the native reconnect way that is implemented in
> > proton and be flexible in the URLs like Qpid JMS and Qpid Python.
> >
>
> If on_transport_error() is called on every disconnect, then I think my
> proposal gives you that. There are syntactic differences - the
> callback is on_transport_error(), and instead of returning URLs you
> update the reconnect options - but the functionality is the same. Does
> that sound right?
>
>
> > Best regards,
> > Rabih
> >
> >
> > On Fri, Jan 18, 2019 at 4:58 PM Alan Conway  wrote:
> >
> > > On Fri, Jan 18, 2019 at 10:35 AM Alan Conway  wrote:
> > >
> > > >
> > > >
> > > > On Thu, Jan 17, 2019 at 6:56 AM Rabih M  wrote:
> > > >
> > > >> Hello,
> > > >>
> > > >> What Olivier was proposing is more at the level of the C++ proton
> > > >> binding.
> > > >> What we would like to do is:
> > > >> Instead of taking a vector of fixed fail-over urls in the
> > > >&g

RE: [Proton-C] Discovery

2019-05-02 Thread HADI Ali
Hello Alan,

I am using Proton 0.27.0 and I am not finding a way to update the reconnect 
options.
How can I use the connection.options().reconnect(reconnect_opts_) you proposed,
or something equivalent, in order to update the reconnect options after the
connection is created?

Regards,
Ali

-Original Message-
From: Alan Conway 
Sent: Tuesday, April 30, 2019 21:11
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery

On Tue, Apr 30, 2019 at 8:25 AM HADI Ali  wrote:

> Hello Alan,
>
> Do you have any updates concerning the proposition to update the
> reconnect options in Proton-C?
> Is it planned and if yes do you have an idea when?
>

The changes I describe below were released in version 0.26, and are available in 
the current release 0.27. You should be able to take advantage of them now.


> Regards,
> Ali
>
> -Original Message-
> From: Alan Conway 
> Sent: Thursday, January 24, 2019 23:24
> To: users@qpid.apache.org
> Subject: Re: [Proton-C] Discovery
>
> On Thu, Jan 24, 2019 at 8:28 AM Rabih M  wrote:
>
> > Hello,
> >
> > Knowing that the on_transport_error will be called only when the
> > max_reconnect is reached, the user will have to manage one reattempt
> > at a time. It will become too flexible: the user will have to write
> > his own reconnect strategy without reusing what was already done in
> > the embedded reconnect code.
> >
>
> That used to be the case, but on_transport_error() is now supposed
> to be called every time there is a transport error, exactly to support
> this kind of use case. I can't remember if that change made it into
> 0.26 or if it's just on master now.
>
> > We would like to reuse the native reconnect way that is implemented in
> > proton and be flexible in the URLs like Qpid JMS and Qpid Python.
> >
>
> If on_transport_error() is called on every disconnect, then I think my
> proposal gives you that. There are syntactic differences - the
> callback is on_transport_error(), and instead of returning URLs you
> update the reconnect options - but the functionality is the same. Does
> that sound right?
>
>
> > Best regards,
> > Rabih
> >
> >
> > On Fri, Jan 18, 2019 at 4:58 PM Alan Conway  wrote:
> >
> > > On Fri, Jan 18, 2019 at 10:35 AM Alan Conway  wrote:
> > >
> > > >
> > > >
> > > > On Thu, Jan 17, 2019 at 6:56 AM Rabih M  wrote:
> > > >
> > > >> Hello,
> > > >>
> > > >> What Olivier was proposing is more at the level of the C++ proton
> > > >> binding.
> > > >> What we would like to do is:
> > > >> Instead of taking a vector of fixed fail-over urls in the
> > > >> reconnect_options, we would like the reconnect_options to take
> > > >> an std::function that returns a URL. This function will be called
> > > >> by proton to get the next URL when there is a failure. This will
> > > >> allow the clients to write their own logic to fetch the new URLs
> > > >> dynamically...
> > > >> On the qpid-jms side we already have this possibility.
> > > >>
> > > >
> > > That sounds reasonable but I'd suggest an alternative that is a bit
> > > more flexible, add this to proton::connection:
> > > >
> > > > // Over-ride connection options to be used the next time the
> > > > // connection re-connects.
> > > > // Takes effect only if/when the connection does re-connect.
> > > > // Typically called in on_transport_error() to influence
> > > > // automatic re-connect.
> > > > connection::reconnect_options(connection::options& overrides)
> > > >
> > > >
> > > BROKEN sorry - that would *replace* all your connection options,
> > > not override the ones you want which is not what I meant.
> > > This is better:
> > >
> > > // Allow updates to the connection_options used by this connection.
> > > // These updates only take effect if/when the connection is re-connected.
> > > // Typically used in on_transport_error() to change the options
> > > // used for automatic re-connect.
> > > connection_options& connection::options();
> > >
> > > So now your case becomes:
> > >
> > > myhandler {
> > > connection_options reconnect_opts_; // Save initial reconnect opts
> > >
> > > void on_transport_error(transport& t)

RE: [Proton-C] Discovery

2019-04-30 Thread HADI Ali
Hello Alan,

Do you have any updates concerning the proposition to update the reconnect 
options in Proton-C?
Is it planned and if yes do you have an idea when?

Regards,
Ali

-Original Message-
From: Alan Conway 
Sent: Thursday, January 24, 2019 23:24
To: users@qpid.apache.org
Subject: Re: [Proton-C] Discovery

On Thu, Jan 24, 2019 at 8:28 AM Rabih M  wrote:

> Hello,
>
> Knowing that the on_transport_error will be called only when the
> max_reconnect is reached, the user will have to manage one reattempt
> at a time. It will become too flexible: the user will have to write
> his own reconnect strategy without reusing what was already done in the
> embedded reconnect code.
>

That used to be the case, but on_transport_error() is now supposed to be 
called every time there is a transport error, exactly to support this kind of 
use case. I can't remember if that change made it into 0.26 or if it's just on 
master now.

> We would like to reuse the native reconnect way that is implemented in
> proton and be flexible in the URLs like Qpid JMS and Qpid Python.
>

If on_transport_error() is called on every disconnect, then I think my proposal 
gives you that. There are syntactic differences - the callback is 
on_transport_error(), and instead of returning URLs you update the reconnect 
options - but the functionality is the same. Does that sound right?


> Best regards,
> Rabih
>
>
> On Fri, Jan 18, 2019 at 4:58 PM Alan Conway  wrote:
>
> > On Fri, Jan 18, 2019 at 10:35 AM Alan Conway  wrote:
> >
> > >
> > >
> > > On Thu, Jan 17, 2019 at 6:56 AM Rabih M  wrote:
> > >
> > >> Hello,
> > >>
> > >> What Olivier was proposing is more at the level of the C++ proton
> > >> binding.
> > >> What we would like to do is:
> > >> Instead of taking a vector of fixed fail-over urls in the
> > >> reconnect_options, we would like the reconnect_options to take an
> > >> std::function that returns a URL. This function will be called by proton
> > >> to get the next URL when there is a failure. This will allow the
> > >> clients to write their own logic to fetch the new URLs dynamically...
> > >> On the qpid-jms side we already have this possibility.
> > >>
> > >
> > > That sounds reasonable but I'd suggest an alternative that is a bit
> > > more flexible, add this to proton::connection:
> > >
> > > // Over-ride connection options to be used the next time the
> > > // connection re-connects.
> > > // Takes effect only if/when the connection does re-connect.
> > > // Typically called in on_transport_error() to influence automatic
> > > // re-connect.
> > > connection::reconnect_options(connection::options& overrides)
> > >
> > >
> > BROKEN sorry - that would *replace* all your connection options, not
> > override the ones you want which is not what I meant.
> > This is better:
> >
> > // Allow updates to the connection_options used by this connection.
> > // These updates only take effect if/when the connection is re-connected.
> > // Typically used in on_transport_error() to change the options used
> > // for automatic re-connect.
> > connection_options& connection::options();
> >
> > So now your case becomes:
> >
> > myhandler {
> >     connection_options reconnect_opts_; // Save initial reconnect opts
> >
> >     void on_transport_error(transport& t) {
> >         reconnect_opts_.failover_urls(pick_my_urls()); // Update the URLs
> >         t.connection().options().reconnect(reconnect_opts_); // Update the connection's options
> >     }
> > }
> >
> > >
> > >
> > >> We would like to know if it sounds reasonable to you before
> > >> proposing a patch. WDYT?
> > >>
> > >> Best regards,
> > >> Rabih
> > >>
> > >> On Thu, Jan 3, 2019 at 9:15 PM Alan Conway  wrote:
> > >>
> > >> > On Thu, Jan 3, 2019 at 7:12 AM Gordon Sim  wrote:
> > >> >
> > >> > > Are you talking specifically about something at the c level
> > >> > > rather than e.g. c++?
> > >> > >
> > >> > > As far as I recall, the c layer has no built in support for
> > >> > > reconnection, that is added by the c++ (or other) wrappers.
> > >> > >
> > >> > > In the c++ api, perhaps the reconnect options in use could be
> > >> > > exposed (such that they can then be altered), or else there could
> > >> > > be a way to provide a function that returns the next url to use
> > >> > > rather than a static list (this is sort of what the python wrapper
> > >> > > allows). That may be what you mean by the onReconnect callback? If
> > >> > > so, it sounds reasonable to me, though it would be better to get
> > >> > > the thoughts of those more involved with that component. (Alan,
> > >> > > Cliff, Andrew?)
> > >> > >
> > >> > >
> > >> > Just to add some detail to what Gordon said - in C there is no
> > >> > reconnect support out-of-the-box, but you have the tools to implement
> > >> > any strategy you like. Use the PN_TRANSPORT_CLOSED event (with
> > >> > pn_transport_error()
> 

RE: [QPID JMS] receive with timeout and reconnect

2019-04-26 Thread HADI Ali
Hello Robbie,

To make a parallel with all the other actions, a timeout (sendTimeout or
requestTimeout) is always added at the level of the FailoverRequest, so that the
action won't necessarily wait for the end of the retry to fail; hence an
exception can be thrown. In our case this makes a big difference, since our
retry can take up to 2 hours and the user may want to be notified that there is
a server-side error while trying to reconnect.
However, we agree that the receive action is much more complex. When the user
does not specify a timeout, the actual behavior makes perfect sense, since the
user doesn't want to continue unless he gets a message. However, once a
specified timeout is reached, we can consider ourselves in the same situation
as the other actions. Maybe we can expose the drainTimeout and add to the
FailoverRequest, for the pull action, the sum of the actual
receiveTimeout + drainTimeout. Indeed, the drainTimeout represents how long we
want to wait after the receiveTimeout before considering the server as not
responding.
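
For reference, the existing timeouts mentioned here are plain qpid-jms URI
options; a hypothetical example with placeholder hosts (the drainTimeout
exposure is only the proposal above, not an existing option):

failover:(amqp://host1:5672,amqp://host2:5672)?failover.maxReconnectAttempts=-1&jms.sendTimeout=30000&jms.requestTimeout=30000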

Regards,
Ali

-Original Message-
From: Robbie Gemmell 
Sent: Monday, April 1, 2019 14:13
To: users@qpid.apache.org
Subject: Re: [QPID JMS] receive with timeout and reconnect

I wouldn't consider it an internal error that you have expressly configured the
client to sit reconnecting and it is currently doing so, such that there is
obviously no message available in the time it is doing that but also no final
connection failure, since you have configured it not to fail the connection
without hours of retry. As a result I would not choose to throw an exception
from receive() in that scenario. There's also not much you can reasonably do
with an exception there either, except try the consumer again regardless, or
close the consumer immediately and then do something like try creating a new
one (which is going to run into the reconnections again), or perhaps clean up
entirely and create a new connection (which your reconnect config suggests you
don't want to do), getting largely back to square one. Since exceptions would be
impl-only in this case, using provider-specific exception handling to discern
detail and adjust behaviour would tie your code non-portably to that JMS
provider impl, not something I'd recommend.

Robbie

On Wed, 27 Mar 2019 at 14:56, VERMEULEN Olivier  wrote:
>
> Hello Robbie,
>
> The behavior we expect from the receive with timeout actually comes from our 
> understanding of the JMS documentation (that may be wrong).
>
> " This call blocks until a message arrives, the timeout expires, or this 
> message consumer is closed. "
> That's why we are expecting the receive to return (with a null value or an 
> exception, that's the second question) after the receive timeout and not 
> after the whole reconnect process which is set to 2 hours in our case.
>
> Now regarding what should be returned/thrown, the doc says:
> " Throws: JMSException - if the JMS provider fails to receive the next 
> message due to some internal error."
> Isn't it the case here, we actually failed to receive the next message, no?
>
> Thanks,
> Olivier
>
> -Original Message-
> From: Robbie Gemmell 
> Sent: Tuesday, March 12, 2019 15:04
> To: users@qpid.apache.org
> Subject: Re: [QPID JMS] receive with timeout and reconnect
>
> I wouldn't use an exception if the server was known to be down and the client 
> has been explicitly configured to keep reconnecting. If the reconnections are 
> ultimately exhausted, then an exception is appropriate.
>
> If the connection is considered live and the server doesn't respond, then an
> exception is also reasonable. The JMS client does that and closes the
> consumer entirely, since if a drain attempt doesn't complete (with messages or
> without) you then can't reason about future attempts properly.
>
> Robbie
>
> On Mon, 11 Mar 2019 at 16:33, HADI Ali  wrote:
> >
> > Hello,
> >
> > We agree that the send and receive operations are different because in the
> > case of the receive we can reach the timeout even if the sender is
> > responding correctly (no message available).
> >
> > We already faced this issue in the implementation of the receive function
> > in our synchronous C++ API, and we solved it by separating the two different
> > receive cases:
> > - Receiving with no message available: the drain call will not hang,
> > meaning the server is responding and no messages are available, so we can
> > return NULL to the receiver.
> > - Receiving while the server is hanging: the drain call will hang because
> > the server is not responding, meaning we should return an exception (same
> > as the send timeout).
> >
> > What do you think of this approach?
> 

RE: [QPID JMS] receive with timeout and reconnect

2019-03-19 Thread HADI Ali
Hello,

Any updates on this issue? Do you think it is feasible to return a NULL message 
at the end of the timeout?
If yes, should we create a Jira ticket corresponding to this issue?

Regards,
Ali

-Original Message-
From: HADI Ali
Sent: Wednesday, March 13, 2019 11:54
To: users@qpid.apache.org
Subject: RE: [QPID JMS] receive with timeout and reconnect

In our C++ API we chose to return an exception if the server doesn't respond,
because the client needs to differentiate between receiving with no message
available and the server not responding (or down), since he will react
differently in each case.
Anyway, the most important thing for us is to respect the timeout limit set by
the client on the receive operation, independently of the reconnect parameters.
Whether it returns an exception or a NULL message is an acceptable solution
for us.

Regards,
Ali

-Original Message-
From: Robbie Gemmell 
Sent: Tuesday, March 12, 2019 15:04
To: users@qpid.apache.org
Subject: Re: [QPID JMS] receive with timeout and reconnect

I wouldn't use an exception if the server was known to be down and the client 
has been explicitly configured to keep reconnecting. If the reconnections are 
ultimately exhausted, then an exception is appropriate.

If the connection is considered live and the server doesn't respond, then an
exception is also reasonable. The JMS client does that and closes the consumer
entirely, since if a drain attempt doesn't complete (with messages or without)
you then can't reason about future attempts properly.

Robbie

On Mon, 11 Mar 2019 at 16:33, HADI Ali  wrote:
>
> Hello,
>
> We agree that the send and receive operations are different because in the
> case of the receive we can reach the timeout even if the sender is responding
> correctly (no message available).
>
> We already faced this issue in the implementation of the receive function in
> our synchronous C++ API, and we solved it by separating the two different
> receive cases:
> - Receiving with no message available: the drain call will not hang, meaning
> the server is responding and no messages are available, so we can return NULL
> to the receiver.
> - Receiving while the server is hanging: the drain call will hang because
> the server is not responding, meaning we should return an exception (same as
> the send timeout).
>
> What do you think of this approach?
>
> Regards,
> Ali
>
> -Original Message-
> From: Robbie Gemmell 
> Sent: Monday, March 11, 2019 15:51
> To: users@qpid.apache.org
> Subject: Re: [QPID JMS] receive with timeout and reconnect
>
> Correct, it's not possible for local-only operation while disabling
> prefetching, as that necessarily requires remote-only.
>
> While the cases are related I'd also consider the two situations somewhat
> different, in that the sender already has a message to process while the
> receiver does not and may not for the entire time you have configured things
> to consider the connection not failed.
>
> As I said, I probably wouldn't throw in this case but it could perhaps return
> null over and over again, though doing so would be tricky as there are a
> bunch more mechanics in the receiver and disabling prefetch further
> complicates matters as you've partly seen.
>
> Robbie
>
> On Fri, 8 Mar 2019 at 14:02, HADI Ali  wrote:
> >
> > Hello,
> >
> > According to the documentation, the jms.receiveLocalOnly URI option makes
> > receive calls with a timeout only check a consumer's local message buffer.
> > This option will solve the receive-with-timeout issue only if we are
> > prefetching messages.
> >
> > In our case, our prefetch policy is set to zero. Can't the receive with
> > timeout behave the same way the send timeout does? As a client of the JMS
> > API, I would expect the receive with timeout to exit in the worst case when
> > the timeout expires, no matter what is happening in the background
> > (reconnect option triggered).
> >
> > Regards,
> > Ali
> >
> > -Original Message-
> > From: Robbie Gemmell 
> > Sent: Tuesday, February 26, 2019 17:27
> > To: users@qpid.apache.org
> > Subject: Re: [QPID JMS] receive with timeout and reconnect
> >
> > I guess it is probably blocking on beginning an attempt to drain the link
> > credit as a way to verify no messages before returning null.
> > Setting the jms.receiveLocalOnly URI option true would stop it draining the
> > link and so I guess let it return null instead of waiting for the failover
> > process to complete.
> >
> > I don't think I'd ever choose to throw from the consumer there;
> > alternatively it could just return null repeatedly since that's what it does
> > otherwise when there 

RE: Dispatch Router prefetch

2019-03-19 Thread HADI Ali
Hello,

Concerning handling the TTL at the level of the Dispatch Router, should we open
a Jira ticket to track the issue and continue the discussion?
Depending on the priority of this issue on both sides, we are open to
contributing if needed.

Regards,
Ali

-Original Message-
From: HADI Ali
Sent: Wednesday, March 13, 2019 11:14
To: users@qpid.apache.org
Subject: RE: Dispatch Router prefetch

We support both; it depends on the use case (we have multiple services using
the messaging).

-Original Message-
From: Robbie Gemmell 
Sent: Tuesday, March 12, 2019 15:15
To: users@qpid.apache.org
Subject: Re: Dispatch Router prefetch

What acknowledgement mode are you using?

On Tue, 12 Mar 2019 at 13:22, HADI Ali  wrote:
>
> Hello,
>
> In our use case we have polling consumers with a prefetch policy of zero that
> issue one credit at a time every few seconds. Between two receives, the
> consumer remains attached with zero credit.
> Thus, not considering a consumer to be a routable destination until it issues
> initial credit would address the problem only for the first message, because
> the dispatch will still prefetch possibly expired messages as soon as the
> destination is considered routable.
>
> In this use case we are consuming a few messages per minute and TTLs are
> between 2 and 5 seconds. Concerning the granularity, one second should be
> sufficient for us.
>
> We also noticed that the broker is not forwarding the TTL set at the level of
> the queue. Is this expected behavior?
>
> Thanks,
> Ali
>
> -Original Message-
> From: Ted Ross 
> Sent: Monday, March 11, 2019 15:32
> To: users@qpid.apache.org
> Subject: Re: Dispatch Router prefetch
>
> On Fri, Mar 8, 2019 at 9:19 AM Gordon Sim  wrote:
>
> > On 08/03/2019 2:12 pm, Gordon Sim wrote:
> > > On 08/03/2019 12:59 pm, HADI Ali wrote:
> > >> Hello,
> > >>
> > >> We are actually using in our cluster multiple brokers and thus we
> > >> need to define the same address on multiple brokers.
> > >> For this, we cannot use linkroutes as suggested, but we still
> > >> need to have the correct behavior of the TTL in our cluster.
> > >>
> > >> Is it an option to manage the TTL of the message at the level of
> > >> the dispatch router since we have all of the information needed
> > >> in the message headers?
> > >
> > > It doesn't do that at present, but it doesn't seem like an
> > > reasonable enhancement to me.
> >
> > Sorry, meant to say it doesn't seem like an *un*reasonable enhancement!
> >
>
> I'd like to better understand the use case here.  We've avoided adding any 
> kind of TTL support in Dispatch Router up to this point.
>
> I assume, based on the fact that prefetch-1 didn't solve your problem, that 
> you have consumers that are attached but don't issue credit for long periods 
> of time.  Is this accurate?
>
> What is the pattern of your consumers?  Do they attach, then later issue 
> credit to process a message?  How many messages per second/minute/hour do 
> your consumers handle?  Do they issue one credit at a time?
>
> What are the typical TTLs in your messages?  How granular does the expiration 
> need to be (i.e. how accurate of a timer would need to be used to tag each 
> incoming delivery)?  Would one-second granularity be sufficient, or do you 
> need milliseconds?
>
> An alternate approach would be to not consider a consumer to be a routable 
> destination until it issues initial credit.  Would this address your problem?
>
>
> >
> > >> In Internet Protocol, ipv4 for example, the routers manage the
> > >> TTL and discard any expired messages.
> > >>
> > >> Or make it feasible to have the autolinks propagate the credit
> > >> directly from consumers?
> > >
> > > This isn't really possible when you have autolinks for same
> > > address to multiple brokers. If the consumer gives 10 credits, how
> > > do you propagate that to two brokers?  5 each? What if they don't both 
> > > have 5 messages?
> > > 10 each? Then you are back to the situation where you have more
> > > credit issued at source than the consumer has granted.
> > >

RE: [QPID JMS] receive with timeout and reconnect

2019-03-13 Thread HADI Ali
In our C++ API we chose to return an exception if the server doesn't respond,
because the client needs to differentiate between receiving with no message
available and the server not responding (or down), since he will react
differently in each case.
Anyway, the most important thing for us is to respect the timeout limit set by
the client on the receive operation, independently of the reconnect parameters.
Whether it returns an exception or a NULL message is an acceptable solution
for us.

Regards,
Ali

-Original Message-
From: Robbie Gemmell 
Sent: Tuesday, March 12, 2019 15:04
To: users@qpid.apache.org
Subject: Re: [QPID JMS] receive with timeout and reconnect

I wouldn't use an exception if the server was known to be down and the client 
has been explicitly configured to keep reconnecting. If the reconnections are 
ultimately exhausted, then an exception is appropriate.

If the connection is considered live and the server doesn't respond, then an
exception is also reasonable. The JMS client does that and closes the consumer
entirely, since if a drain attempt doesn't complete (with messages or without)
you then can't reason about future attempts properly.

Robbie

On Mon, 11 Mar 2019 at 16:33, HADI Ali  wrote:
>
> Hello,
>
> We agree that the send and receive operations are different because in the
> case of the receive we can reach the timeout even if the sender is responding
> correctly (no message available).
>
> We already faced this issue in the implementation of the receive function in
> our synchronous C++ API, and we solved it by separating the two different
> receive cases:
> - Receiving with no message available: the drain call will not hang, meaning
> the server is responding and no messages are available, so we can return NULL
> to the receiver.
> - Receiving while the server is hanging: the drain call will hang because
> the server is not responding, meaning we should return an exception (same as
> the send timeout).
>
> What do you think of this approach?
>
> Regards,
> Ali
>
> -Original Message-
> From: Robbie Gemmell 
> Sent: Monday, March 11, 2019 15:51
> To: users@qpid.apache.org
> Subject: Re: [QPID JMS] receive with timeout and reconnect
>
> Correct, it's not possible for local-only operation while disabling
> prefetching, as that necessarily requires remote-only.
>
> While the cases are related I'd also consider the two situations somewhat
> different, in that the sender already has a message to process while the
> receiver does not and may not for the entire time you have configured things
> to consider the connection not failed.
>
> As I said, I probably wouldn't throw in this case but it could perhaps return
> null over and over again, though doing so would be tricky as there are a
> bunch more mechanics in the receiver and disabling prefetch further
> complicates matters as you've partly seen.
>
> Robbie
>
> On Fri, 8 Mar 2019 at 14:02, HADI Ali  wrote:
> >
> > Hello,
> >
> > According to the documentation, the jms.receiveLocalOnly URI option makes
> > receive calls with a timeout only check a consumer's local message buffer.
> > This option will solve the receive-with-timeout issue only if we are
> > prefetching messages.
> >
> > In our case, our prefetch policy is set to zero. Can't the receive with
> > timeout behave the same way the send timeout does? As a client of the JMS
> > API, I would expect the receive with timeout to exit in the worst case when
> > the timeout expires, no matter what is happening in the background
> > (reconnect option triggered).
> >
> > Regards,
> > Ali
> >
> > -Original Message-
> > From: Robbie Gemmell 
> > Sent: Tuesday, February 26, 2019 17:27
> > To: users@qpid.apache.org
> > Subject: Re: [QPID JMS] receive with timeout and reconnect
> >
> > I guess it is probably blocking on beginning an attempt to drain the link
> > credit as a way to verify no messages before returning null.
> > Setting the jms.receiveLocalOnly URI option true would stop it draining the
> > link and so I guess let it return null instead of waiting for the failover
> > process to complete.
> >
> > I don't think I'd ever choose to throw from the consumer there;
> > alternatively it could just return null repeatedly since that's what it does
> > otherwise when there aren't messages it can give.
> >
> > Robbie
> >
> > On Mon, 25 Feb 2019 at 10:16, VERMEULEN Olivier 
> >  wrote:
> > >
> > > Hello,
> > >
> > > We're using QPID JMS 0.39.0 with a set of reconnect options that makes 
> > > the client retry to connect for 2 hours in case of problem.
> &g

RE: Dispatch Router prefetch

2019-03-13 Thread HADI Ali
We support both; it depends on the use case (we have multiple services using
the messaging).

-Original Message-
From: Robbie Gemmell 
Sent: mardi 12 mars 2019 15:15
To: users@qpid.apache.org
Subject: Re: Dispatch Router prefetch

What acknowledgement mode are you using?

On Tue, 12 Mar 2019 at 13:22, HADI Ali  wrote:
>
> Hello,
>
> In our use case we have polling consumers with a prefetch policy of zero that
> issue one credit at a time every few seconds. Between two receives, the
> consumer remains attached with zero credit.
> Thus, not considering a consumer to be a routable destination until it issues
> initial credit would address the problem only for the first message, because
> the dispatch will still prefetch possibly expired messages as soon as the
> destination is considered routable.
>
> In this use case we are consuming a few messages per minute and TTLs are
> between 2 and 5 seconds. Concerning the granularity, one second should be
> sufficient for us.
>
> We also noticed that the broker is not forwarding the TTL set at the level of
> the queue. Is this expected behavior?
>
> Thanks,
> Ali
>
> -Original Message-
> From: Ted Ross 
> Sent: Monday, March 11, 2019 15:32
> To: users@qpid.apache.org
> Subject: Re: Dispatch Router prefetch
>
> On Fri, Mar 8, 2019 at 9:19 AM Gordon Sim  wrote:
>
> > On 08/03/2019 2:12 pm, Gordon Sim wrote:
> > > On 08/03/2019 12:59 pm, HADI Ali wrote:
> > >> Hello,
> > >>
> > >> We are actually using in our cluster multiple brokers and thus we
> > >> need to define the same address on multiple brokers.
> > >> For this, we cannot use linkroutes as suggested, but we still
> > >> need to have the correct behavior of the TTL in our cluster.
> > >>
> > >> Is it an option to manage the TTL of the message at the level of
> > >> the dispatch router since we have all of the information needed
> > >> in the message headers?
> > >
> > > It doesn't do that at present, but it doesn't seem like an
> > > reasonable enhancement to me.
> >
> > Sorry, meant to say it doesn't seem like an *un*reasonable enhancement!
> >
>
> I'd like to better understand the use case here.  We've avoided adding any 
> kind of TTL support in Dispatch Router up to this point.
>
> I assume, based on the fact that prefetch-1 didn't solve your problem, that 
> you have consumers that are attached but don't issue credit for long periods 
> of time.  Is this accurate?
>
> What is the pattern of your consumers?  Do they attach, then later issue 
> credit to process a message?  How many messages per second/minute/hour do 
> your consumers handle?  Do they issue one credit at a time?
>
> What are the typical TTLs in your messages?  How granular does the expiration 
> need to be (i.e. how accurate of a timer would need to be used to tag each 
> incoming delivery)?  Would one-second granularity be sufficient, or do you 
> need milliseconds?
>
> An alternate approach would be to not consider a consumer to be a routable 
> destination until it issues initial credit.  Would this address your problem?
>
>
> >
> > >> In Internet Protocol, ipv4 for example, the routers manage the
> > >> TTL and discard any expired messages.
> > >>
> > >> Or make it feasible to have the autolinks propagate the credit
> > >> directly from consumers?
> > >
> > > This isn't really possible when you have autolinks for same
> > > address to multiple brokers. If the consumer gives 10 credits, how
> > > do you propagate that to two brokers?  5 each? What if they don't both 
> > > have 5 messages?
> > > 10 each? Then you are back to the situation where you have more
> > > credit issued at source than the consumer has granted.
> > >

RE: Dispatch Router prefetch

2019-03-12 Thread HADI Ali
Hello,

In our use case we have polling consumers with a prefetch policy of zero that
issue one credit at a time every few seconds. Between two receives, the
consumer remains attached with zero credit.
Thus, not considering a consumer to be a routable destination until it issues
initial credit would address the problem only for the first message, because
the dispatch will still prefetch possibly expired messages as soon as the
destination is considered routable.
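
A minimal sketch of such a polling consumer, written here with the Proton C++
API and manual credit via credit_window(0); the URL and address are placeholders:

#include <proton/container.hpp>
#include <proton/delivery.hpp>
#include <proton/message.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/receiver.hpp>
#include <proton/receiver_options.hpp>

class polling_consumer : public proton::messaging_handler {
    void on_container_start(proton::container& c) override {
        // credit_window(0) disables automatic prefetch (prefetch policy of zero).
        c.open_receiver("amqp://router:5672/my.queue",
                        proton::receiver_options().credit_window(0));
    }
    void on_receiver_open(proton::receiver& r) override {
        r.add_credit(1); // issue one credit at a time
    }
    void on_message(proton::delivery& d, proton::message& m) override {
        // Process m; in our case the next credit follows a few seconds later.
        d.receiver().add_credit(1);
    }
};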

In this use case we are consuming a few messages per minute and TTLs are
between 2 and 5 seconds. Concerning the granularity, one second should be
sufficient for us.

We also noticed that the broker is not forwarding the TTL set at the level of
the queue. Is this expected behavior?

Thanks,
Ali

-Original Message-
From: Ted Ross 
Sent: lundi 11 mars 2019 15:32
To: users@qpid.apache.org
Subject: Re: Dispatch Router prefetch

On Fri, Mar 8, 2019 at 9:19 AM Gordon Sim  wrote:

> On 08/03/2019 2:12 pm, Gordon Sim wrote:
> > On 08/03/2019 12:59 pm, HADI Ali wrote:
> >> Hello,
> >>
> >> We are actually using in our cluster multiple brokers and thus we
> >> need to define the same address on multiple brokers.
> >> For this, we cannot use linkroutes as suggested, but we still need
> >> to have the correct behavior of the TTL in our cluster.
> >>
> >> Is it an option to manage the TTL of the message at the level of
> >> the dispatch router since we have all of the information needed in
> >> the message headers?
> >
> > It doesn't do that at present, but it doesn't seem like an
> > reasonable enhancement to me.
>
> Sorry, meant to say it doesn't seem like an *un*reasonable enhancement!
>

I'd like to better understand the use case here.  We've avoided adding any kind 
of TTL support in Dispatch Router up to this point.

I assume, based on the fact that prefetch-1 didn't solve your problem, that you 
have consumers that are attached but don't issue credit for long periods of 
time.  Is this accurate?

What is the pattern of your consumers?  Do they attach, then later issue credit 
to process a message?  How many messages per second/minute/hour do your 
consumers handle?  Do they issue one credit at a time?

What are the typical TTLs in your messages?  How granular does the expiration 
need to be (i.e. how accurate of a timer would need to be used to tag each 
incoming delivery)?  Would one-second granularity be sufficient, or do you need 
milliseconds?

An alternate approach would be to not consider a consumer to be a routable 
destination until it issues initial credit.  Would this address your problem?


>
> >> In Internet Protocol, ipv4 for example, the routers manage the TTL
> >> and discard any expired messages.
> >>
> >> Or make it feasible to have the autolinks propagate the credit
> >> directly from consumers?
> >
> > This isn't really possible when you have autolinks for the same address
> > to multiple brokers. If the consumer gives 10 credits, how do you
> > propagate that to two brokers? 5 each? What if they don't both have
> > 5 messages? 10 each? Then you are back to the situation where you have
> > more credit issued at source than the consumer has granted.



RE: [QPID JMS] receive with timeout and reconnect

2019-03-11 Thread HADI Ali
Hello,

We agree that the send and receive operations are different, because in the case of the receive we can reach the timeout even if the server is responding correctly (no message available).

We already faced this issue in the implementation of the receive function in our synchronous C++ API, and we solved it by separating the two receive cases:
- Receiving with no message available: the drain call does not hang, meaning the server is responding and no messages are available, so we can return NULL to the receiver.
- Receiving while the server is hanging: the drain call hangs because the server is not responding, meaning we should return an exception (same as the send timeout).
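
Concretely, a minimal sketch of that logic (the grace period and the exception are our own choices, and the hand-off of the drain() call to the container thread, e.g. via a work_queue, is omitted for brevity):

    #include <proton/messaging_handler.hpp>
    #include <proton/receiver.hpp>
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <stdexcept>

    // Distinguishes "drain completed" (caller returns NULL) from "drain
    // never completed" (throw, like the send timeout).
    class sync_receive_handler : public proton::messaging_handler {
        std::mutex mtx_;
        std::condition_variable cv_;
        bool drained_ = false;

        void on_receiver_drain_finish(proton::receiver&) override {
            std::lock_guard<std::mutex> l(mtx_);
            drained_ = true;
            cv_.notify_one();
        }

    public:
        // Called once the receive timeout expired with no message delivered.
        void check_server(proton::receiver& r, std::chrono::seconds grace) {
            r.drain();  // ask the sender to use up or void the outstanding credit
            std::unique_lock<std::mutex> l(mtx_);
            if (!cv_.wait_for(l, grace, [this] { return drained_; }))
                throw std::runtime_error("receive timed out: server not responding");
            drained_ = false;  // drain completed: caller returns NULL to the user
        }
    };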

What do you think of this approach?

Regards,
Ali

-Original Message-
From: Robbie Gemmell 
Sent: Monday, 11 March 2019 15:51
To: users@qpid.apache.org
Subject: Re: [QPID JMS] receive with timeout and reconnect

Correct, it's not possible for local-only operation while disabling prefetching, as that necessarily requires remote-only operation.

While the cases are related, I'd also consider the two situations somewhat different, in that the sender already has a message to process while the receiver does not, and may not for the entire time you have configured things to consider the connection not failed.

As I said, I probably wouldn't throw in this case, but it could perhaps return null over and over again, though doing so would be tricky as there are a bunch more mechanics in the receiver, and disabling prefetch further complicates matters as you've partly seen.

Robbie

On Fri, 8 Mar 2019 at 14:02, HADI Ali  wrote:
>
> Hello,
>
> According to the documentation, the jms.receiveLocalOnly URI option makes
> receive calls with a timeout only check a consumer's local message
> buffer. This option will solve the receive-with-timeout issue only if we are
> prefetching messages.
>
> In our case, our prefetch policy is set to zero. Can't the receive with
> timeout behave the same way the send timeout does? As a client of the JMS
> API, I would expect the receive with timeout to exit in the worst case when
> the timeout expires, no matter what is happening in the background (reconnect
> option triggered).
>
> Regards,
> Ali
>
> -Original Message-
> From: Robbie Gemmell 
> Sent: Tuesday, 26 February 2019 17:27
> To: users@qpid.apache.org
> Subject: Re: [QPID JMS] receive with timeout and reconnect
>
> I guess it is probably blocking on beginning an attempt to drain the link
> credit as a way to verify there are no messages before returning null.
> Setting the jms.receiveLocalOnly URI option true would stop it draining the
> link and so, I guess, let it return null instead of waiting for the failover
> process to complete.
>
> I don't think I'd ever choose to throw from the consumer there; alternatively
> it could just return null repeatedly, since that's what it does otherwise when
> there aren't messages it can give.
>
> Robbie
>
> On Mon, 25 Feb 2019 at 10:16, VERMEULEN Olivier  
> wrote:
> >
> > Hello,
> >
> > We're using QPID JMS 0.39.0 with a set of reconnect options that makes the 
> > client retry to connect for 2 hours in case of problem.
> > When doing a synchronous receive call with a smaller timeout (like 60 
> > seconds) we were expecting to receive a TimeOutException after 60 seconds 
> > but we actually have to wait for the whole reconnect to end, so 2 hours.
> > Is that expected? We were expecting a behavior similar to the one we have 
> > with the sendTimeout (defined at the level of the connection factory) where 
> > the send fails but the reconnect continues behind the scene.
> >
> > Thanks,
> > Olivier
> >

RE: [QPID JMS] receive with timeout and reconnect

2019-03-08 Thread HADI Ali
Hello,

According to the documentation, the jms.receiveLocalOnly URI option makes receive calls with a timeout only check a consumer's local message buffer. This option will solve the receive-with-timeout issue only if we are prefetching messages.
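
For reference, this is roughly how the two options would be combined on a connection URI (host and port are placeholders; jms.prefetchPolicy.all=0 mirrors our zero-prefetch setup):

    amqp://localhost:5672?jms.prefetchPolicy.all=0&jms.receiveLocalOnly=true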

In our case, our prefetch policy is set to zero. Can't the receive with timeout behave the same way the send timeout does? As a client of the JMS API, I would expect the receive with timeout to exit in the worst case when the timeout expires, no matter what is happening in the background (reconnect option triggered).

Regards,
Ali

-Original Message-
From: Robbie Gemmell 
Sent: Tuesday, 26 February 2019 17:27
To: users@qpid.apache.org
Subject: Re: [QPID JMS] receive with timeout and reconnect

I guess it is probably blocking on beginning an attempt to drain the link credit as a way to verify there are no messages before returning null.
Setting the jms.receiveLocalOnly URI option true would stop it draining the link and so, I guess, let it return null instead of waiting for the failover process to complete.

I don't think I'd ever choose to throw from the consumer there; alternatively it could just return null repeatedly, since that's what it does otherwise when there aren't messages it can give.

Robbie

On Mon, 25 Feb 2019 at 10:16, VERMEULEN Olivier  
wrote:
>
> Hello,
>
> We're using QPID JMS 0.39.0 with a set of reconnect options that makes the 
> client retry to connect for 2 hours in case of problem.
> When doing a synchronous receive call with a smaller timeout (like 60 
> seconds) we were expecting to receive a TimeOutException after 60 seconds but 
> we actually have to wait for the whole reconnect to end, so 2 hours.
> Is that expected? We were expecting a behavior similar to the one we have 
> with the sendTimeout (defined at the level of the connection factory) where 
> the send fails but the reconnect continues behind the scene.
>
> Thanks,
> Olivier
>


RE: Dispatch Router prefetch

2019-03-08 Thread HADI Ali
Hello,

We are actually using multiple brokers in our cluster, and thus we need to define the same address on multiple brokers.
Because of this, we cannot use link routes as suggested, but we still need the TTL to behave correctly in our cluster.

Is it an option to manage the TTL of the message at the level of the dispatch router, since we have all of the information needed in the message headers? In the Internet Protocol (IPv4, for example), routers manage the TTL and discard any expired packets.

Or would it be feasible to have the autolinks propagate the credit directly from the consumers? The dispatch router would then only transit messages, and the broker would handle the lifecycle of the message.
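
To illustrate the first idea, a hypothetical sketch of the expiry check such a router could apply, based on the standard AMQP ttl header and creation-time property as exposed by Proton C++ (on_transit and forward are placeholder names of ours):

    #include <proton/delivery.hpp>
    #include <proton/message.hpp>
    #include <chrono>
    #include <cstdint>

    // Placeholder for whatever transit/forwarding logic would apply.
    void forward(proton::delivery& d, proton::message& m);

    // Discard a message whose TTL has elapsed instead of forwarding it.
    void on_transit(proton::delivery& d, proton::message& m) {
        using namespace std::chrono;
        if (m.ttl().milliseconds() > 0) {
            int64_t expiry = m.creation_time().milliseconds()
                             + static_cast<int64_t>(m.ttl().milliseconds());
            int64_t now = duration_cast<milliseconds>(
                system_clock::now().time_since_epoch()).count();
            if (now >= expiry) {
                d.release();  // expired: drop rather than deliver
                return;
            }
        }
        forward(d, m);
    }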

Thank you,
Ali

-Original Message-
From: Gordon Sim 
Sent: Wednesday, 6 March 2019 16:25
To: users@qpid.apache.org
Subject: Re: Dispatch Router prefetch

On 06/03/2019 1:20 pm, ali hadi wrote:
> We are using a cluster with one Dispatch Router version 1.5.0 and one
> Broker-J version 7.1.0, on which we created a queue with a TTL of 5 seconds.
>
>
> We are noticing that the dispatch router is prefetching messages from the
> broker (messages go into the acquired state) as soon as the consumer
> establishes a connection with the dispatch router, before it starts
> consuming. This causes messages not to be discarded after the TTL
> expires, and the consumer receives expired messages.
>
>
> We tried changing the linkCapacity at the level of the connector to 1,
> which allowed us to prefetch only one message instead of the default 250.
>
>
> We are trying to find a way to completely remove the prefetch of the
> dispatch router in order to get the correct TTL behavior in our cluster.
>
> Are there any flags or properties to be set in order for the dispatch
> router to only fetch a message on consumer demand?

When using autolinks, the autolink for messages from the broker to the router will at present be activated as soon as there is an active receiver for the messages, whether or not that receiver has credit. In message routing, the credit is not directly propagated from client to broker.

If you use a link route then credit would be propagated directly, i.e. only when your client issues credit will the link between broker and router get credit (and it will be the exact same amount).
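
For illustration, a link route for a given address is configured along these lines (a hypothetical qdrouterd.conf fragment; the names, host and prefix are placeholders):

    # The connector plays the route-container role; the two linkRoute
    # entities route sender and receiver links for the prefix straight to
    # the broker, so credit flows end-to-end.
    connector {
        name: broker1
        host: broker-host
        port: 5672
        role: route-container
    }
    linkRoute {
        prefix: myQueue
        connection: broker1
        direction: in
    }
    linkRoute {
        prefix: myQueue
        connection: broker1
        direction: out
    }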


-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

***
This e-mail contains information for the intended recipient only. It may 
contain proprietary material or confidential information. If you are not the 
intended recipient you are not authorized to distribute, copy or use this 
e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
and accepts no responsibility for any loss or damage arising from its use. If 
you have received this e-mail in error please notify immediately the sender and 
delete the original email received, any attachments and all copies from your 
system.


RE: Drain issue in proton C++

2019-01-21 Thread HADI Ali
Hello,

Do you have an idea when Proton 0.27 is planned to be released?
We are particularly interested in the drain issue.

Thanks,
Ali

-Original Message-
From: Gordon Sim 
Sent: Monday, 14 January 2019 13:02
To: users@qpid.apache.org
Subject: Re: Drain issue in proton C++

On 14/01/19 10:31, HADI Ali wrote:
> Hello,
>
> We tested the scenario with Proton 0.26 and we saw the same behavior. However,
> it is fixed with your patch, as you can see in the attached log.
>
> You can raise a JIRA for the issue. Will the fix be integrated in the next
> release?

https://issues.apache.org/jira/browse/PROTON-1990, yes I hope/expect it will be.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For additional 
commands, e-mail: users-h...@qpid.apache.org

***
This e-mail contains information for the intended recipient only. It may 
contain proprietary material or confidential information. If you are not the 
intended recipient you are not authorized to distribute, copy or use this 
e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
and accepts no responsibility for any loss or damage arising from its use. If 
you have received this e-mail in error please notify immediately the sender and 
delete the original email received, any attachments and all copies from your 
system.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org



RE: Drain issue in proton C++

2019-01-14 Thread HADI Ali
Hello,

We tested the scenario with Proton 0.26 and we saw the same behavior. However, it is fixed with your patch, as you can see in the attached log.

You can raise a JIRA for the issue. Will the fix be integrated in the next release?

Regards,
Ali

-Original Message-
From: Gordon Sim 
Sent: Wednesday, 9 January 2019 15:35
To: users@qpid.apache.org
Subject: Re: Drain issue in proton C++

On 09/01/19 10:28, ali hadi wrote:
> We are currently facing some issues with the drain function in Proton
> C++ 0.22.0 .
>
> We are trying to do synchronous fetch with timeout by doing a drain
> after the timeout expires.
>
> However after the first drain request is done, the drain flag is not
> reset to false causing all the following actions to also request a
> drain from the sender.

Do you see the same behaviour against the latest release (0.26)? If so I would raise a JIRA for this behaviour; it sounds like a bug.

> We couldn’t find a way to change the drain flag in the
> on_receiver_drain_finish function. Should I create a JIRA issue?

I would expect that you would not need to directly change the flag, but that once you get the on_receiver_drain_finish callback the drain 'cycle' is complete and the flag is no longer set.

Indeed in messaging_adapter.cpp the draining flag on the receiver's context 
does appear to be reset:

    if (!pn_link_credit(lnk) && lctx.draining) {
        lctx.draining = false;
        receiver r(make_wrapper(lnk));
        handler.on_receiver_drain_finish(r);
    }

I *think* it may be missing a pn_link_set_drain() as per attached patch.
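
(The attached patch is not reproduced in the archive; going by the description above, it would presumably amount to something like the following. This is our reconstruction, not the actual attachment:)

    if (!pn_link_credit(lnk) && lctx.draining) {
        lctx.draining = false;
        pn_link_set_drain(lnk, false);  // clear the protocol-level drain flag
        receiver r(make_wrapper(lnk));
        handler.on_receiver_drain_finish(r);
    }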

(Also, shouldn't that be pn_link_remote_credit() in the test? or are they 
guaranteed to be the same at this point?)

ReceiverClient on_container_start
[012D57A3A0E0]:  -> AMQP
[012D57A3A0E0]:0 -> @open(16) 
[container-id="292348be-1884-47e1-8a6c-3dd62cae7252", hostname="localhost", 
channel-max=32767]
[012D57A3A0E0]:0 -> @begin(17) [next-outgoing-id=0, 
incoming-window=2147483647, outgoing-window=2147483647]
[012D57A3A0E0]:0 -> @attach(18) 
[name="905f4916-f16d-415a-bb41-e926eb01b30b", handle=0, role=true, 
snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="myQueue", 
durable=0, timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0, 
dynamic=false], initial-delivery-count=0, max-message-size=0]
[012D57A3A0E0]:  <- AMQP
[012D57A3A0E0]:0 <- @open(16) 
[container-id="a51c297e-c548-42df-a42c-a3e8e0ae2356", max-frame-size=262144, 
channel-max=255, idle-time-out=0, 
offered-capabilities=@PN_SYMBOL[:"ANONYMOUS-RELAY", :"SHARED-SUBS", 
:"sole-connection-for-container"], properties={:product="qpid", 
:version="7.0.3", :"qpid.build"="62d7ae692b47746c42949c2e646f33a966e6a1a1", 
:"qpid.instance_name"="Broker", 
:"qpid.virtualhost_properties_supported"="true"}]
[012D57A3A0E0]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=0, 
incoming-window=8192, outgoing-window=2048]
[012D57A3A0E0]:0 <- @attach(18) 
[name="905f4916-f16d-415a-bb41-e926eb01b30b", handle=0, role=false, 
snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [address="myQueue", 
durable=0, dynamic=false, default-outcome=@modified(39) [delivery-failed=true], 
outcomes=@PN_SYMBOL[:"amqp:accepted:list", :"amqp:released:list", 
:"amqp:rejected:list"], capabilities=@PN_SYMBOL[:queue]], target=@target(41) 
[durable=0, timeout=0, dynamic=false], unsettled={}, initial-delivery-count=0, 
offered-capabilities=@PN_SYMBOL[:"SHARED-SUBS"], properties={}]
ReceiverClient on_connection_open
ReceiverClient on_session_open
ReceiverClient on_receiver_open
first receive
[012D57A3A0E0]:0 -> @flow(19) [next-incoming-id=0, 
incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
handle=0, delivery-count=0, link-credit=1, drain=false]
drain
[012D57A3A0E0]:0 -> @flow(19) [next-incoming-id=0, 
incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
handle=0, delivery-count=0, link-credit=1, drain=true]
[012D57A3A0E0]:0 <- @flow(19) [next-incoming-id=0, incoming-window=8192, 
next-outgoing-id=0, outgoing-window=2048, handle=0, delivery-count=1, 
link-credit=0, available=0, drain=true, echo=false]
ReceiverClient on_receiver_drain_finish
second receive
[012D57A3A0E0]:0 -> @flow(19) [next-incoming-id=0, 
incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647, 
handle=0, delivery-count=1, 

RE: Send timeout in proton C++

2018-09-28 Thread HADI Ali
The action I want to take is to stop the send. Is there a way to do this other than stopping the container? For example, during the receive, we are using the drain function after the timeout to stop receiving.

Thanks,
Ali

-Original Message-
From: Gordon Sim  
Sent: Friday, 28 September 2018 13:00
To: users@qpid.apache.org
Subject: Re: Send timeout in proton C++

On 28/09/18 10:49, ali hadi wrote:
> Hello,
>
> Our messaging topology uses a dispatch-router in front of many Java brokers.
>
> In the case where all our brokers are down, we want to throw an
> exception to the producer after a timeout. This is not possible with
> the idle-timeout parameter, since the producer is connected to the
> dispatch router, which is still responding.
>
> Is there an equivalent to the JMS send timeout parameter, or a way to
> not leave the producer hanging forever in Proton C++?

No, but you can implement a timeout like that using the schedule() method on the container. On sending a message, schedule a check on the returned tracker after the appropriate timeout. If, when that fires, the delivery the tracker refers to has not been settled, you can take whatever action is needed to handle the timeout.
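
A minimal sketch of that suggestion (all names are ours; what to do on timeout is application-specific, and closing the connection with an error condition is just one option):

    #include <proton/connection.hpp>
    #include <proton/container.hpp>
    #include <proton/duration.hpp>
    #include <proton/error_condition.hpp>
    #include <proton/message.hpp>
    #include <proton/sender.hpp>
    #include <proton/tracker.hpp>

    // Send, then schedule a settlement check on the returned tracker.
    void send_with_timeout(proton::sender& snd, const proton::message& msg,
                           proton::duration timeout) {
        proton::tracker trk = snd.send(msg);
        snd.connection().container().schedule(timeout, [trk]() mutable {
            if (!trk.settled()) {
                // Nothing settled the delivery in time: fail the send.
                trk.connection().close(
                    proton::error_condition("send timed out"));
            }
        });
    }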

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For additional 
commands, e-mail: users-h...@qpid.apache.org

***

This e-mail contains information for the intended recipient only. It may 
contain proprietary material or confidential information. If you are not the 
intended recipient you are not authorised to distribute, copy or use this 
e-mail or any attachment to it. Murex cannot guarantee that it is virus free 
and accepts no responsibility for any loss or damage arising from its use. If 
you have received this e-mail in error please notify immediately the sender and 
delete the original email received, any attachments and all copies from your 
system.

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org