[Proton Cpp] container.stop called in a separate thread

2020-02-14 Thread Rabih M
Hello,

I am using proton cpp 0.27.1.

While working on the C++ imperative API, I encountered a problem when using
container.stop.
Please find the test code attached (a simplified version of the current
implementation).
I also attached the core dump I am getting.

The problem:
In the example I attached, the container.stop in the ImperativeContainer
destructor might have a race condition with the closing of the connection.
Is everything related to this connection already freed by the time
on_transport_close is called? In on_transport_close I signal the main thread
to continue and execute container.stop, which could race with the connection
teardown if any actions are still taken after on_transport_close returns...

A probable solution:
I think we need a reliable way to release a handler:
it could be an unregister-handler function, or a callback fired after
on_transport_close that tells us the memory of the handler can safely be
released.
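
To make the proposal concrete, here is a rough sketch of how such a callback
could be used; the on_handler_released name is purely hypothetical (nothing
like it exists in proton 0.27.1), the point is only that the promise would be
fulfilled once the library guarantees it is done with the handler:

#include <proton/messaging_handler.hpp>
#include <proton/transport.hpp>

#include <future>

struct ReleasableHandler : public proton::messaging_handler {
   void on_transport_close(proton::transport&) override {
      // The engine may still touch connection state after this returns, so
      // signalling the main thread from here is what creates the race today.
   }

   // Hypothetical callback, for illustration only -- not part of proton 0.27.1.
   // void on_handler_released() {
   //    m_onReleasedPromise.set_value(); // now safe to free the handler / call stop()
   // }

   std::promise<void> m_onReleasedPromise;
};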

Best regards,
Rabih


#include <proton/connection.hpp>
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/transport.hpp>
#include <proton/work_queue.hpp>

#include <future>
#include <iostream>
#include <string>
#include <thread>

struct ContainerHandler : public proton::messaging_handler {
   void on_container_start(proton::container&) override {
      std::cout << "client on_container_start" << std::endl;
      m_onOpenPromise.set_value();
   }

   void on_container_stop(proton::container&) override {
      std::cout << "client on_container_stop" << std::endl;
      m_onStopPromise.set_value();
   }

   std::promise<void> m_onOpenPromise;
   std::promise<void> m_onStopPromise;
};

struct ImperativeContainer {
   ImperativeContainer()
   : m_container(m_containerHand)
   {
      auto openContainerFuture = m_containerHand.m_onOpenPromise.get_future();
      m_thread = std::thread([&]() {
         try {
            m_container.auto_stop(false);
            m_container.run();
         }
         catch (const std::exception& e) {
            std::cout << "std::exception caught on pn container, message:" << e.what() << std::endl;
         }});
      openContainerFuture.get();
   }

   ~ImperativeContainer() {
      auto closeContainerFuture = m_containerHand.m_onStopPromise.get_future();
      m_container.stop();
      closeContainerFuture.get();
      if (m_thread.joinable()) {
         m_thread.join();
      }
   }

   ContainerHandler m_containerHand;
   proton::container m_container;
   std::thread m_thread;
};

struct ConnectionHandler : public proton::messaging_handler {
   void on_connection_open(proton::connection& conn) override {
      std::cout << "client on_connection_open" << std::endl;
      m_connection = conn;
      m_work = &m_connection.work_queue();
      m_onOpenPromise.set_value();
   }

   void on_transport_close(proton::transport&) override {
      std::cout << "client on_transport_close" << std::endl;
      // to avoid a race condition in the free
      m_connection = proton::connection();
      m_onClosePromise.set_value();
      std::cout << "client on_transport_close finished" << std::endl;
   }

   void on_connection_close(proton::connection&) override {
      std::cout << "client on_connection_close" << std::endl;
   }

   void on_connection_error(proton::connection& conn) override {
      std::cout << "client on_connection_error: " << conn.error().what() << std::endl;
   }

   void on_transport_open(proton::transport&) override {
      std::cout << "client on_transport_open" << std::endl;
   }

   void on_transport_error(proton::transport& t) override {
      std::cout << "client on_transport_error: " << t.error().what() << std::endl;
   }

   proton::connection m_connection;
   proton::work_queue* m_work = nullptr;

   std::promise<void> m_onOpenPromise;
   std::promise<void> m_onClosePromise;
};

class ImperativeConnection {
public:
   ImperativeConnection(proton::container& myContainer, const std::string& url) {
      auto openConnectionFuture = m_connectionHand.m_onOpenPromise.get_future();
      myContainer.connect(url, m_connectionHand);
      openConnectionFuture.get();
   }

   ~ImperativeConnection() {
      auto closeConnectionFuture = m_connectionHand.m_onClosePromise.get_future();
      m_connectionHand.m_work->add([=]() { m_connectionHand.m_connection.close(); });
      closeConnectionFuture.wait();
   }

private:
   ConnectionHandler m_connectionHand;
};

int main(int argc, char **argv) {
   std::string url = argc > 1 ? argv[1] : "//127.0.0.1:5672";
   ImperativeContainer cont;
   ImperativeConnection conn(cont.m_container, url);
}
#0  pn_error_free (error=0x)
at /data/home/proton/qpid-proton-0.27.1/c/src/core/error.c:50
#1  0x7f5439a301b7 in pn_data_finalize (object=0x1a22420)
at /data/home/proton/qpid-proton-0.27.1/c/src/core/codec.c:90
#2  0x7f5439a2d218 in pn_class_decref (clazz=0x7f5439c54880 , 
object=0x1a22420)
at /data/home/proton/qpid-proton-0.27.1/c/src/core/object/object.c:95
#3  0x7f5439a36f1d in pn_condition_tini (condition=0x1a24990)
at /data/home/proton/qpid-proton-0.27.1/c/src/core/engine.c:219
#4  0x7f5439a396a6 in pn_condition_free (c=0x1a24990)
at /data/home/proton/qpid-proton-0.27.1/c/src/core/engine.c:227
#5  0x7f5439c59728 in 

Re: An imperative AMQP client API

2020-01-28 Thread Rabih M
Hello,

I will share our "work in progress" for the C++ implementation even if it
is not ready yet: https://github.com/rabih-mourad/qpid-imperative-proton
There is plenty of refactoring and API alignment still to do...
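
For a feel of the direction, usage currently looks roughly like the snippet
below (mirroring the POC code quoted elsewhere in this archive; Container,
Connection, Session, Receiver and Delivery are the POC's own wrapper types
from the repository above, not the released proton API, and the details will
change with the refactoring):

   // Imperative, std::future-based usage as sketched by the work-in-progress POC.
   // All types below are the POC's wrappers, not the released proton API.
   proton::Container cont;
   proton::Connection conn = cont.openConnection("//127.0.0.1:5672", proton::connection_options());
   proton::Session sess = conn.openSession(proton::session_options());
   proton::Receiver rec = sess.openReceiver("examples", proton::receiver_options());

   proton::Delivery del = rec.receive().get(); // block until a message arrives
   del.accept();                               // explicit disposition, no callback needed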

Best regards,
Rabih

On Mon, Jan 27, 2020 at 4:35 PM Rabih M  wrote:

> Hello,
>
> We (Murex) are very excited about this new imperative API project.
> We are using proton C++ on the client side for some years now and we
> experienced the difficulties this model has, as Justin described in his
> first mail.
> On our side, we began implementing the new API in the C++ language and we
> will share it with you as soon as we have something presentable.
>
> I would like to thank all the contributors for their efforts on this
> project.
>
> Best regards,
> Rabih
>
> On Mon, Jan 13, 2020 at 1:40 PM Justin Ross  wrote:
>
>> Hi, everyone.  For a while now, some members of the Qpid community have
>> been working on a new style of messaging API.  It now has a reasonable
>> shape, and we want to share it and get your feedback.
>>
>> We currently offer either JMS or Proton's reactive API.  These certainly
>> aren't going anywhere - they're important - but for some use cases, the
>> absence of a high-level command-oriented API for AMQP messaging is a
>> source
>> of inconvenience.
>>
>> This inconvenience comes in two forms.  First, JMS is helpfully imperative
>> (in part - it contains multitudes), but it doesn't expose some of the
>> things you can do with AMQP.  And it can't reasonably expose those things,
>> because that would break the contract of the JMS API.  Second, the Proton
>> APIs, since they are reactive, make it harder to handle cases where you
>> need to sequence the processing of events.
>>
>> The imperative client API we are talking about here uses modern language
>> support for futures or coroutines.  Most of the API's operation is
>> asynchronous, but you can easily introduce blocking where you need to.
>>
>> It's a client API only.  We think a comparably high-level server API would
>> be its own dedicated thing, functioning more like Python Flask or JAX-RS.
>> In any case, the Proton reactive API is already a good fit for writing
>> servers.
>>
>> We have the outline of the API and some proof-of-concept implementations,
>> but much remains to be done.  We anticipate that this work will ultimately
>> become a sub-module of Proton, such as proton/client.
>>
>> The API spec:
>> http://www.ssorj.net/pumpjack/client/index.html
>>
>> I've worked on the Python prototypes, so I'll share them here.  I did
>> these
>> some time ago, so they need updating for the latest API spec changes, but
>> they serve to show how the API can use futures or coroutines.  (For
>> Python,
>> coroutines are the preferred approach.)
>>
>> Python prototype (future-based):
>> https://github.com/ssorj/gambit/blob/futures/python/demo.py
>> https://github.com/ssorj/gambit/blob/futures/python/gambit.py
>>
>> Python prototype (coroutine-based):
>> https://github.com/ssorj/gambit/blob/asyncio2/python/demo.py
>> https://github.com/ssorj/gambit/blob/asyncio2/python/gambit.py
>>
>> Some of the other folks who have done exploratory work will follow up on
>> this thread to show what they've done.
>>
>> Some caveats: this is in early stages, and the API will change as we
>> discuss it more.  There are also big outstanding pieces to look at, such
>> as
>> reconnect and failover, to name just one.
>>
>> Thanks for your time, and please let us know what you think.
>>
>


Re: An imperative AMQP client API

2020-01-27 Thread Rabih M
Hello,

We (Murex) are very excited about this new imperative API project.
We are using proton C++ on the client side for some years now and we
experienced the difficulties this model has, as Justin described in his
first mail.
On our side, we began implementing the new API in the C++ language and we
will share it with you as soon as we have something presentable.

I would like to thank all the contributors for their efforts on this
project.

Best regards,
Rabih

On Mon, Jan 13, 2020 at 1:40 PM Justin Ross  wrote:

> Hi, everyone.  For a while now, some members of the Qpid community have
> been working on a new style of messaging API.  It now has a reasonable
> shape, and we want to share it and get your feedback.
>
> We currently offer either JMS or Proton's reactive API.  These certainly
> aren't going anywhere - they're important - but for some use cases, the
> absence of a high-level command-oriented API for AMQP messaging is a source
> of inconvenience.
>
> This inconvenience comes in two forms.  First, JMS is helpfully imperative
> (in part - it contains multitudes), but it doesn't expose some of the
> things you can do with AMQP.  And it can't reasonably expose those things,
> because that would break the contract of the JMS API.  Second, the Proton
> APIs, since they are reactive, make it harder to handle cases where you
> need to sequence the processing of events.
>
> The imperative client API we are talking about here uses modern language
> support for futures or coroutines.  Most of the API's operation is
> asynchronous, but you can easily introduce blocking where you need to.
>
> It's a client API only.  We think a comparably high-level server API would
> be its own dedicated thing, functioning more like Python Flask or JAX-RS.
> In any case, the Proton reactive API is already a good fit for writing
> servers.
>
> We have the outline of the API and some proof-of-concept implementations,
> but much remains to be done.  We anticipate that this work will ultimately
> become a sub-module of Proton, such as proton/client.
>
> The API spec:
> http://www.ssorj.net/pumpjack/client/index.html
>
> I've worked on the Python prototypes, so I'll share them here.  I did these
> some time ago, so they need updating for the latest API spec changes, but
> they serve to show how the API can use futures or coroutines.  (For Python,
> coroutines are the preferred approach.)
>
> Python prototype (future-based):
> https://github.com/ssorj/gambit/blob/futures/python/demo.py
> https://github.com/ssorj/gambit/blob/futures/python/gambit.py
>
> Python prototype (coroutine-based):
> https://github.com/ssorj/gambit/blob/asyncio2/python/demo.py
> https://github.com/ssorj/gambit/blob/asyncio2/python/gambit.py
>
> Some of the other folks who have done exploratory work will follow up on
> this thread to show what they've done.
>
> Some caveats: this is in early stages, and the API will change as we
> discuss it more.  There are also big outstanding pieces to look at, such as
> reconnect and failover, to name just one.
>
> Thanks for your time, and please let us know what you think.
>


Re: [Qpid-proton-cpp] Performance regression found in 0.29.0

2019-11-20 Thread Rabih M
Hi,

Jira issue created: PROTON-2137
<https://issues.apache.org/jira/browse/PROTON-2137>.

Best regards,
Rabih

On Wed, Nov 20, 2019 at 5:01 PM Andrew Stitcher 
wrote:

> Please raise a JIRA with all this information and the reproducer - it
> is very hard to track fixes without this.
>
> Also you could attach your patch, but it is easier in this case to use
> a github PR with a reference to the JIRA. The Apache automation will
> tie them together and this makes it again easier to track.
>
> Andrew
>
> On Tue, 2019-11-19 at 18:04 +, HADI Ali wrote:
> > Hello,
> >
> > After analysis we discovered that the regression is coming from
> > PROTON-2075
> > <https://github.com/apache/qpid-proton/commit/e152190459cd75792002d2aae72d351dc22abe27>:
> > [C++] Allow TLS to use system default trusted certificate.
> > In fact we noticed that the ssl_client_options and the
> > ssl_server_options are not default constructed the same way and that
> > the second one
> > <https://github.com/apache/qpid-proton/blob/e152190459cd75792002d2aae72d351dc22abe27/cpp/src/ssl_options.cpp#L99>
> > is calling pni_init_ssl_domain
> > <https://github.com/apache/qpid-proton/blob/9dd013335de0694bc52848897b17190f297450c1/c/src/ssl/openssl.c#L475>
> > which is taking some time.
> >
> > What we would like is to avoid initializing ssl when it’s disabled
> > from the connection_options.
> > Does it sound reasonable for you? Should we create a Jira issue and
> > propose a fix?
> >
> > Thanks,
> > Ali & Rabih
> >
> > From: Rabih M 
> > Sent: mercredi 13 novembre 2019 19:22
> > To: users@qpid.apache.org
> > Subject: [Qpid-proton-cpp] Performance regression found in 0.29.0
> >
> > Hello,
> >
> > We are upgrading in our code the proton version from 0.27.0 to
> > 0.29.0.
> > While running our unit tests, we found a considerable performance
> > regression.
> >
> > We were able to reproduce the regression in a very simple use case.
> > Please find the code attached.
> >
> > This test takes 1 ms in the version 0.27.0 and 0.28.0 but it takes 73
> > ms in 0.29.0 .
> >
> > Do you know what might be the cause?
> > We will try to investigate in parallel from our side, too.
> >
> > Thanks,
> > Rabih & Ali
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: [Qpid-proton-cpp] Performance regression found in 0.29.0

2019-11-20 Thread Rabih M
Hello again,

Here is a patch that we propose. All unit tests are green on the master.

@Andrew Stitcher, can you please check whether this patch is aligned with
what you meant to do in PROTON-2075?

Best regards,
Rabih & Ali

On Tue, Nov 19, 2019 at 7:04 PM HADI Ali  wrote:

> Hello,
>
> After analysis we discovered that the regression is coming from
> PROTON-2075<
> https://github.com/apache/qpid-proton/commit/e152190459cd75792002d2aae72d351dc22abe27>:
> [C++] Allow TLS to use system default trusted certificate.
> In fact we noticed that the ssl_client_options and the ssl_server_options
> are not default constructed the same way and that the second one<
> https://github.com/apache/qpid-proton/blob/e152190459cd75792002d2aae72d351dc22abe27/cpp/src/ssl_options.cpp#L99>
> is calling pni_init_ssl_domain<
> https://github.com/apache/qpid-proton/blob/9dd013335de0694bc52848897b17190f297450c1/c/src/ssl/openssl.c#L475>
> which is taking some time.
>
> What we would like is to avoid initializing ssl when it’s disabled from
> the connection_options.
> Does it sound reasonable for you? Should we create a Jira issue and
> propose a fix?
>
> Thanks,
> Ali & Rabih
>
> From: Rabih M 
> Sent: mercredi 13 novembre 2019 19:22
> To: users@qpid.apache.org
> Subject: [Qpid-proton-cpp] Performance regression found in 0.29.0
>
> Hello,
>
> We are upgrading in our code the proton version from 0.27.0 to 0.29.0.
> While running our unit tests, we found a considerable performance
> regression.
>
> We were able to reproduce the regression in a very simple use case.
> Please find the code attached.
>
> This test takes 1 ms in the version 0.27.0 and 0.28.0 but it takes 73 ms
> in 0.29.0 .
>
> Do you know what might be the cause?
> We will try to investigate in parallel from our side, too.
>
> Thanks,
> Rabih & Ali
>
diff --git a/cpp/src/connection_options.cpp b/cpp/src/connection_options.cpp
index 2bf281d1..b61c2c68 100644
--- a/cpp/src/connection_options.cpp
+++ b/cpp/src/connection_options.cpp
@@ -175,7 +175,8 @@ class connection_options::impl {
             }
         } else if (!client && ssl_server_options.set) {
             pn_ssl_t *ssl = pn_ssl(pnt);
-            if (pn_ssl_init(ssl, ssl_server_options.value.impl_->pn_domain(), NULL)) {
+            pn_ssl_domain_t* ssl_domain = ssl_server_options.value.impl_ ? ssl_server_options.value.impl_->pn_domain() : NULL;
+            if (pn_ssl_init(ssl, ssl_domain, NULL)) {
                 throw error(MSG("server SSL/TLS initialization error"));
             }
         }
diff --git a/cpp/src/ssl_options.cpp b/cpp/src/ssl_options.cpp
index bd4d5c17..f74f014e 100644
--- a/cpp/src/ssl_options.cpp
+++ b/cpp/src/ssl_options.cpp
@@ -99,7 +99,7 @@ ssl_server_options::ssl_server_options(
         throw error(MSG("SSL server configuration failure requiring client certificates using " << db));
     }
 
-ssl_server_options::ssl_server_options() : impl_(new impl) {}
+ssl_server_options::ssl_server_options() : impl_(0) {}
 
 ssl_client_options::ssl_client_options(const ssl_client_options& x): impl_(x.impl_) {
     if (impl_) impl_->incref();

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

[Qpid-proton-cpp] Performance regression found in 0.29.0

2019-11-13 Thread Rabih M
Hello,

We are upgrading the proton version in our code from 0.27.0 to 0.29.0.
While running our unit tests, we found a considerable performance
regression.

We were able to reproduce the regression in a very simple use case.
Please find the code attached.

This test takes 1 ms in versions 0.27.0 and 0.28.0, but 73 ms in 0.29.0.

Do you know what might be the cause?
We will try to investigate in parallel from our side, too.

Thanks,
Rabih & Ali
#include <proton/connection.hpp>
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>

#include <iostream>

class handler : public proton::messaging_handler {

public:
   handler(const std::string& u) : url(u) {}

private:
   void on_container_start(proton::container& c) override
   {
      c.connect(url, proton::connection_options().sasl_enabled(false));
   }

   void on_connection_open(proton::connection& c) override
   {
      c.close();
   }

   std::string url;
};


int main()
{
   try
   {
      handler h("127.0.0.1:1234"); // wrong port
      proton::container(h).run();
   }
   catch (const std::exception& e)
   {
      std::cout << "Exception thrown at the client side: " << e.what();
   }
   return 0;
}


-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Re: multicast without consumers

2019-10-30 Thread Rabih M
Hello,

Thanks for the prompt responses.

Just for context:
The version we are using in production is the Qpid dispatch 1.5.0.
Our architecture consists of a dispatch router which is a facade for all
consumers and producers. Behind the dispatch router, we have one or
multiple brokers.
The producers connect to the dispatch router only. They publish their
messages in a topic on the broker (via an autolink on the dispatch).
The broker internally routes the messages (through binding keys) to
specific queues.
The dispatch router prefetches the messages from those queues on a
different autolink, to deliver them later in multicast mode to the
consumers.
Then the consumers connect to the dispatch to receive the messages.

The problem is:
When there are no more consumers attached to the dispatch and there are still
messages on the broker queue, we get a cycle: issue credit - transfer a
message - release it. This happens thousands of times per second between
Broker-J and the dispatch, saturating CPU and network resources. We attached
a sample of the dispatch logs in trace mode where you can clearly see 4
cycles in approximately 1.2 milliseconds.

From what we understood from the doc and your emails, this problem also
exists on the latest version (we haven't tested it yet).
The dispatch router will always release the message then issue a credit if
there are no consumers connected (the deadly loop).
Is our understanding correct?

We are trying to do a patch to break the cycle: we do not issue credits
when we release a message if there are no consumers connected.
But apparently this is not aligned with your vision on how multicast should
work (DISPATCH-779).
Any suggestions of what can be done?

Thanks,
Jeremy and Rabih


On Wed, Oct 30, 2019 at 3:41 PM Ted Ross  wrote:

> On Tue, Oct 29, 2019 at 6:23 PM VERMEULEN Olivier <
> olivier.vermeu...@murex.com> wrote:
>
> > Hello,
> >
> > Yes the waypoint address (in from broker) is using a multicast
> > distribution.
> > Unfortunately skipping the broker is not an option for us right now.
> > Our whole architecture relies on the broker to guarantee that no messages
> > will ever be lost...
> > For information we're asking for a quick workaround because we're facing
> > this problem on a client production environment...
> >
>
> Are you looking for a patch you can apply locally to work around your
> issue?
>
>
> >
> > Thanks,
> > Olivier
> >
> > -Original Message-
> > From: Ken Giusti 
> > Sent: mardi 29 octobre 2019 18:07
> > To: users 
> > Subject: Re: multicast without consumers
> >
> > On Tue, Oct 29, 2019 at 11:54 AM jeremy  wrote:
> >
> > > Hello Gordon,
> > >
> > > We debugged the dispatch router, and fell on the code which releases
> > > undeliverable messages(
> > > https://github.com/apache/qpid-dispatch/blob/1.5.0/src/router_core/tra
> > > nsfer.c#L869
> > > ).
> > >
> > > Check the comment on line 879. It states that if the distribution is
> > > multicast, the credit will be replenished after the release. The issue
> > > that introduced this behavior is:
> > > https://issues.apache.org/jira/browse/DISPATCH-1012
> > >
> > >
> > Is the waypoint address (in from broker) using multicast distribution?
> >
> > The router treats multicast addresses like topics - you can publish to a
> > multicast address (topic) regardless of the presence of consumers.
> That's
> > the reason credit is being replenished even when no consumers are
> present.
> >
> > That's probably what's happening here - broker sends first queued message
> > to the router, which attempts to send it to the topic.   Since there are
> no
> > consumers (and the message is sent from the broker as unsettled) the
> > router cannot deliver it so it returns the released status.  The released
> > status causes the broker to redeliver the message. Repeat.
> >
> >
> >
> >
> > > In fact, we need an urgent fix/workaround for this. Perhaps there is a
> > > quick workaround, awaiting the full analysis of this problem?
> > >
> > >
> > As a work around can you avoid sending these multicast messages to the
> > broker queue?  In other words send them directly to the router instead of
> > using a waypoint?
> >
> >
> >
> > > Thanks
> > >
> > >
> > >
> > >
> > > -
> > > Cheers,
> > > Jeremy
> > > --
> > > Sent from:
> > > http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
> > >
> > > -
> > > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org For
> > > additional commands, e-mail: users-h...@qpid.apache.org
> > >
> > >
> >
> > --
> > -K

Re: Uneven distribution of messages

2019-07-22 Thread Rabih M
Hello Ted,

We cannot use the link-routes because we need to have multiple consumers
connected to the same queue.

Having multiple queues depending on the task execution time can be a good
solution.
We are thinking about another solution too:
to have a consumer that receives all the messages and dispatches them to
free workers, while also monitoring the load to scale the workers up and
down depending on the work received...

Thanks for your help,
Best regards,
Rabih

On Fri, Jul 19, 2019 at 6:02 PM Ted Ross  wrote:

> On Fri, Jul 19, 2019 at 7:50 AM Rabih M  wrote:
>
> > Hello,
> >
> > Yes this is one of our production use cases, where we have low throughput
> > but the processing of the messages can take from seconds to minutes
> > (financial calculations). This is why it was bothering us to have an idle
> > consumer while another is overloaded.
> >
>
> You should consider using link-routes for your consumers.  Link-routes
> trade off the load balancing for tight control of credit end-to-end.  If a
> consumer is link-routed to a broker, the broker won't deliver a message on
> that link until the consumer issues a credit,  and then it will only
> delivery as many messages as credits were issued.
>
>
> >
> > I understand that for the performance, we need to prefetch messages from
> > broker to make them available in the dispatch, but what i did not
> > understand is why inside a router we need to assign the prefetched
> messages
> > to a consumer connection and not waiting until the connected consumer
> > issues a credit, knowing that from a performance point of view, the
> costly
> > action of "prefetching the messages through IO calls" was made.
> > Is it because the complexity of the routing algorithm and the
> > communications between the dispatch routers will increase?
> >
> >
> When messages are sent into the router network, they are immediately routed
> to a destination.  The routers don't hold messages for later routing.
> Also, the synchronization of real-time credit state across a network for
> all addresses is not practical or scalable.
>
>
> > Last point, we did the following test:
> > A dispatch router have a connector with a LinkCapacity=250 connected to a
> > broker and a listener with a LinkCapacity=1 connected to a consumer.
> >
>
> The important link capacity in this scenario is the 250 as it controls the
> router pre-fetch.  The consumer's link capacity of 1 is not relevant to
> this case.
>
>
> >
> > [image: Untitled Diagram.jpg]
> >
>
> I can't see the diagram, but I think I get the idea.
>
>
> >
> >
> > 1- the router 1 prefetches 250 message from the broker
> > 2- the consumer issues a credit
> > 3- the consumer receives a message from the router but does not
> acknowledge
> > 4- the consumer issues another credit
> > 5- the consumer receives a message from the router but does not
> > acknowledge again
> > Steps 4 and 5 can be repeated until the 250 msgs are all transferred to
> > the consumer
> >
>
> This is consistent with there being only one consumer for the address on
> the network at the time the broker sent the 250 messages.
>
>
> >
> > Is this an expected behavior? consumer 1 should not acknowledge before he
> > can receive another message knowing that the link capacity of the
> listener
> > is 1?
> >
>
> Best practice for acknowledgement is for the consumer to acknowledge
> (settle) immediately after finishing the processing of the message (i.e.
> once that message is no longer consuming memory or compute resources on the
> host).  This causes the settlement state of deliveries to be directly
> related to consumer resources.  Again, the link capacity of 1 is not having
> any effect on the behavior of this scenario.
>
>
> >
> > Thanks for your explanations and help,
> >
>
> Am I to understand that your case is this?  You have a distributed work
> queue in which the time-to-process is highly variable.  Some messages are
> processed quickly and others take much longer.  You don't want to incur the
> longer latency on messages that can be handled quickly if there are many
> more fast messages than slow messages.
>
> Is it possible to know beforehand which messages are going to take long?
> Could you put these on a different queue with a different address?
>
>
> > Best regards,
> > Rabih
> >
> >
> > On Wed, Jul 17, 2019 at 6:46 PM Ted Ross  wrote:
> >
> >>
> >>
> >> On Wed, Jul 17, 2019 at 12:00 PM Rabih M 
> wrote:
> >>
> >>> Hello,
> >

Re: Uneven distribution of messages

2019-07-19 Thread Rabih M
Hello,

Yes this is one of our production use cases, where we have low throughput
but the processing of the messages can take from seconds to minutes
(financial calculations). This is why it was bothering us to have an idle
consumer while another is overloaded.

I understand that, for performance, we need to prefetch messages from the
broker to make them available in the dispatch. What I did not understand is
why, inside a router, we need to assign the prefetched messages to a consumer
connection instead of waiting until the connected consumer issues a credit,
given that the costly action of "prefetching the messages through I/O calls"
has already been done.
Is it because the complexity of the routing algorithm and the communication
between the dispatch routers would increase?

Last point, we did the following test:
A dispatch router have a connector with a LinkCapacity=250 connected to a
broker and a listener with a LinkCapacity=1 connected to a consumer.

[image: Untitled Diagram.jpg]


1- the router 1 prefetches 250 message from the broker
2- the consumer issues a credit
3- the consumer receives a message from the router but does not acknowledge
4- the consumer issues another credit
5- the consumer receives a message from the router but does not acknowledge
again
Steps 4 and 5 can be repeated until the 250 msgs are all transferred to the
consumer

Is this the expected behavior? Shouldn't consumer 1 have to acknowledge before
it can receive another message, given that the link capacity of the listener
is 1?
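
For reference, a consumer behaving like the one in these steps (ask for one
credit, process the message, acknowledge, then ask for the next one) can be
sketched with the plain proton C++ API roughly as follows; this is only a
sketch, assuming the usual credit_window(0)/add_credit(1) pattern:

#include <proton/container.hpp>
#include <proton/delivery.hpp>
#include <proton/message.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/receiver.hpp>
#include <proton/receiver_options.hpp>

#include <iostream>
#include <string>

// Sketch of a one-credit-at-a-time receiver: credit is only replenished after
// the previous delivery has been processed and settled.
class one_credit_receiver : public proton::messaging_handler {
   std::string m_url;
public:
   one_credit_receiver(const std::string& url) : m_url(url) {}

   void on_container_start(proton::container& c) override {
      // credit_window(0) disables the automatic prefetch window on this link.
      c.open_receiver(m_url, proton::receiver_options().credit_window(0));
   }

   void on_receiver_open(proton::receiver& r) override {
      r.add_credit(1); // ask for exactly one message
   }

   void on_message(proton::delivery& d, proton::message&) override {
      std::cout << "message processed" << std::endl;
      d.accept();                 // settle once the work is done...
      d.receiver().add_credit(1); // ...then ask for the next message
   }
};

int main() {
   one_credit_receiver h("//127.0.0.1:5672/myQueue");
   proton::container(h).run();
}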

Thanks for your explanations and help,
Best regards,
Rabih


On Wed, Jul 17, 2019 at 6:46 PM Ted Ross  wrote:

>
>
> On Wed, Jul 17, 2019 at 12:00 PM Rabih M  wrote:
>
>> Hello,
>>
>> We tested with LinkCapacity equal to 1 on the "normal" listener with
>> debug level trace+, here are our findings:
>> Our Cluster:
>> [image: Diagram.jpg]
>> We are using the broker-j, the consumer are connected to the dispatch
>> routers before we start sending.
>>
>> For use case 1:
>> 1- the producer sends a message.
>> 2- Consumer 1 issues one credit, receives the message without
>> acknowledging.
>> 3- the producer sends another message.
>> 4- Consumer 2 in auto-ack mode issues one credit and receives the message.
>> 5- we repeated steps 3 and 4 ten times.
>> 6- Consumer 1 acknowledges.
>> The results were correct: all the messages were correctly distributed to
>> the idle consumer.
>>
>> For use case 2:
>> 1- the producer send 10 messages while no credits were issued yet by the
>> consumers.
>> 2- Consumer 1 issues one credit, receives a message without acknowledging.
>> 3- Consumer 2 in auto-ack mode issues one credit and times out after 5
>> seconds if nothing is received.
>> 4- we repeated step 3 eight times.
>> 5- Consumer 1 acknowledges.
>> The results were not as expected: 4 messages were blocked in the outbound
>> queue of the consumer 1 and consumer 2 was able to receive only 5 messages.
>> We analysed the traces to follow the messages. We found that 4 messages
>> were blocked in the dispatch 1.
>> Conclusion: if no consumers are issuing credits (are busy) then the
>> incoming messages will be pre-assigned automatically by the dispatch router
>> to the listeners (in a round robin way?).
>>
>> Is it an expected behavior in the dispatch router? is it not supposed to
>> wait for a credit to be issued before binding the message to an outbound
>> queue?
>>
>
> Yes, this is the expected behavior.  The router does not propagate each
> individual credit from receiver to sender.  It would be impractical to do
> so, would not scale well, and probably still wouldn't provide the behavior
> you expect.  What the router does is to use delivery settlement as the way
> to control credit flow to producers.  If the link capacity is 250, each
> producer will be limited to 250 unsettled deliveries at any time.  As the
> deliveries are settled, more credit is issued.  This scales well in large
> networks, keeps a limit on the memory consumed by deliveries, and allows
> for high delivery rates.
>
> Is this a real use case or are you experimenting to learn how the router
> works?
>
> Under steady state flow, the messages will be delivered to the consumers
> in proportion to the rate at which the consumers acknowledge (settle)
> deliveries.  If a consumer attaches a receiving link but withholds credit,
> the router network will route deliveries to that consumer in anticipation
> of credit being issued.  It is an anti-pattern to attach a receiving link
> and stop issuing credit.  Credit should be used to control the rate of
> delivery.  If you want to stop delivery, detach the rec

Re: Uneven distribution of messages

2019-07-17 Thread Rabih M
Hello,

We tested with LinkCapacity equal to 1 on the "normal" listener with debug
level trace+. Here are our findings:
Our Cluster:
[image: Diagram.jpg]
We are using Broker-J; the consumers are connected to the dispatch
routers before we start sending.

For use case 1:
1- the producer sends a message.
2- Consumer 1 issues one credit, receives the message without acknowledging.
3- the producer sends another message.
4- Consumer 2 in auto-ack mode issues one credit and receives the message.
5- we repeated steps 3 and 4 ten times.
6- Consumer 1 acknowledges.
The results were correct: all the messages were correctly distributed to
the idle consumer.

For use case 2:
1- the producer sends 10 messages while no credits have been issued yet by the
consumers.
2- Consumer 1 issues one credit, receives a message without acknowledging.
3- Consumer 2 in auto-ack mode issues one credit and times out after 5
seconds if nothing is received.
4- we repeated step 3 eight times.
5- Consumer 1 acknowledges.
The results were not as expected: 4 messages were blocked in the outbound
queue of the consumer 1 and consumer 2 was able to receive only 5 messages.
We analysed the traces to follow the messages. We found that 4 messages
were blocked in the dispatch 1.
Conclusion: if no consumers are issuing credits (are busy) then the
incoming messages will be pre-assigned automatically by the dispatch router
to the listeners (in a round robin way?).

Is this the expected behavior of the dispatch router? Is it not supposed to
wait for a credit to be issued before binding the message to an outbound
queue?
If you want to check the paths of the messages, I attached the routers logs
of the use case 2.

Best regards,
Rabih

On Mon, Jul 15, 2019 at 8:14 PM Ganesh Murthy  wrote:

> On Mon, Jul 15, 2019 at 1:26 PM Rabih M  wrote:
>
> > Hello,
> >
> > We are testing with the trace+ log. We will brief you of the results when
> > they are ready.
> >
> > Our goal is not to load balance equally the messages between the
> consumers
> > but we would like the dispatch router to send the message to the free
> > consumer.
> >
>
> What you did (setting the linkCapacity on the producer and consumer
> listeners to 1) is the right thing to do if you want the router
> to send the message to *any* free consumer.
>
> If Consumer C1 is processing the first message and does not issue the next
> credit until it finishes processing the first message and if the second
> message arrives to Router 1 when C1 is
> still processing the message, then Router 1 will definitely forward the
> message to Router 2. BUT if C1 is fast enough and ends up processing the
> first message and immediately issues credit, the second message if it
> arrives in Router 1 will also
> be sent to C1 (because the router prefers local consumers).
>
> Remember that the key here is which Router ends up getting the message
> since each broker has two autoLinks to both routers and we don't know which
> autoLink the broker will choose.
>
> But overall, with your new configuration, the router will send the message
> to a consumer that is not busy no further configuration is necessary.
>
> Specially if the consumer does a long calculation, we do not want to block
> > the message in the outbound queue knowing there is another idle consumer.
> >
> > Is there any special configuration for this?
> >
> > Best regards,
> > Rabih
> >
> >
> > On Fri, Jul 12, 2019 at 7:26 PM Ganesh Murthy 
> wrote:
> >
> > > On Fri, Jul 12, 2019 at 11:29 AM Rabih M 
> > wrote:
> > >
> > > > Hello,
> > > >
> > > > Thank you Ganesh for your answer.
> > > > To answer your first question yes the consumers we have, issue one
> > credit
> > > > per call.
> > > > For the the second question, yes we are attaching consumers before
> > > sending
> > > > the messages.
> > > >
> > > > After setting the dispatch "link capacity" on the listeners to 1, we
> > > > managed to have an equal load-balancing of the messages between
> > consumer
> > > 1
> > > > and 2.
> > > > Our understanding is that the problem comes from the prefetch at the
> > > level
> > > > of the "normal" listener, which is preventing the "inter-router"
> > listener
> > > > to pass messages to the second dispatch.
> > > >
> > > Just want to note here that the linkCapacity on the inter-router
> > listeners
> > > are ignored. So setting linkCapacity to 1 on that listener has no
> effect.
> > >
> > > This is what I think is happening
> > > 1. The Router 1 (R1)

Re: Uneven distribution of messages

2019-07-15 Thread Rabih M
Hello,

We are testing with the trace+ log. We will brief you on the results when
they are ready.

Our goal is not to load-balance the messages equally between the consumers;
rather, we would like the dispatch router to send each message to a free
consumer.
Especially when a consumer is doing a long calculation, we do not want to
block a message in its outbound queue while another consumer is idle.

Is there any special configuration for this?

Best regards,
Rabih


On Fri, Jul 12, 2019 at 7:26 PM Ganesh Murthy  wrote:

> On Fri, Jul 12, 2019 at 11:29 AM Rabih M  wrote:
>
> > Hello,
> >
> > Thank you Ganesh for your answer.
> > To answer your first question yes the consumers we have, issue one credit
> > per call.
> > For the the second question, yes we are attaching consumers before
> sending
> > the messages.
> >
> > After setting the dispatch "link capacity" on the listeners to 1, we
> > managed to have an equal load-balancing of the messages between consumer
> 1
> > and 2.
> > Our understanding is that the problem comes from the prefetch at the
> level
> > of the "normal" listener, which is preventing the "inter-router" listener
> > to pass messages to the second dispatch.
> >
> Just want to note here that the linkCapacity on the inter-router listeners
> are ignored. So setting linkCapacity to 1 on that listener has no effect.
>
> This is what I think is happening
> 1. The Router 1 (R1) and Router 2 (R2) are started and each of them connect
> to both Brokers, B1 and B2 and establish the autolinks. As soon as inbound
> autoLinks (inbound to the router from the broker) are established on
> address myQueue, the broker(s) immediately provides initial credit on both
> autoLinks. This initial credit is usually 500 or 1000 depending on the
> broker you are using.
> 2. Now Consumer 1 (C1) connects to R1 and Consumer 2 (C2) connects to R2
> providing one credit each respectively to each router.
> 3. Now the Producer connects to R1 and sends one message M1 (it cannot send
> more than one message because the linkCapacity is set to 1). Assuming M1
> goes to B1 (it has an equal chance of ending up in B2 as well), B1  sees
> that there are 2 consumers (autolinks) for that message.Say B1 sends M1  to
> R1, M1 will be sent to C1.
> 4. Producer produces M2 and it goes to B2. B2 might send it to R1 or R2.
>  4a. If B2 sends M2 to R1, the message might go over the
> inter-router link  to R2 to C2 because there is no credit to send to C1.
>  4b. If B2 sends M2 to R2, then C2 gets M2  because routers prefer
> local consumers to remote consumers.
>   4c. If C1 is fast enough to consume M1 and replenish the credit
> and if B2 ends up sending M2 to R1, C1 might end up getting the second
> message as well.
>   4d. You need to also find out if the brokers load balances across
> autolinks.
>
> In conclusion, you are lucky that the messages are being load balanced
> across C1 and C2 when you changed linkCapacity to 1 but it is making it
> more likely. This might not always happen.
>
> Turn on trace logging on the routers to see which route the messages are
> taking. Try the scenario over and over again.
>
> To turn on trace logging add the following to the router config file -
>
> log {
> module: DEFAULT
> enable: trace+
> output: path/to/qdrouterd.log
> }
>
> If you want true load balancing between the consumers, put a router R3 in
> front of R1 and R2 and connect both consumers  to R3. The producer can
> connect to any router.
>
> >
> > You can find attached the new config files.
> >
> > Is this the correct way to resolve this kind of problem? Does it sound
> > reasonable to you?
> >
> > Best regards,
> > Rabih and Ali
> >
> > On Thu, Jul 11, 2019 at 5:50 PM Ganesh Murthy 
> wrote:
> >
> >>
> >>
> >> On Thu, Jul 11, 2019 at 10:37 AM Rabih M 
> wrote:
> >>
> >>> Hello,
> >>>
> >>> We are using Qpid dispatch router 1.7, Qpid broker 7.1.3 on redhat
> linux
> >>> rhel 6.4.
> >>>
> >>> Use case description:
> >>> We have a cluster of 2 dispatch routers and 2 brokers. A sharded queue
> >>> "myQueue" on the 2 brokers.
> >>> Here is an illustration:
> >>>
> >>> [image: Diagram.jpg]
> >>>
> >>> The producer produces 2 messages, the messages are load balanced on the
> >>> 2 brokers. Then, Consumer 1 and consumer 2 ask for a receive().
> >>>
> >> Does your receive() function issue one credit per call?
> >

Re: Uneven distribution of messages

2019-07-12 Thread Rabih M
Hello,

Thank you Ganesh for your answer.
To answer your first question: yes, our consumers issue one credit per call.
For the second question: yes, we are attaching the consumers before sending
the messages.

After setting the dispatch "link capacity" on the listeners to 1, we
managed to have an equal load-balancing of the messages between consumer 1
and 2.
Our understanding is that the problem comes from the prefetch at the level
of the "normal" listener, which prevents the "inter-router" listener from
passing messages to the second dispatch.

You can find attached the new config files.

Is this the correct way to resolve this kind of problem? Does it sound
reasonable to you?

Best regards,
Rabih and Ali

On Thu, Jul 11, 2019 at 5:50 PM Ganesh Murthy  wrote:

>
>
> On Thu, Jul 11, 2019 at 10:37 AM Rabih M  wrote:
>
>> Hello,
>>
>> We are using Qpid dispatch router 1.7, Qpid broker 7.1.3 on redhat linux
>> rhel 6.4.
>>
>> Use case description:
>> We have a cluster of 2 dispatch routers and 2 brokers. A sharded queue
>> "myQueue" on the 2 brokers.
>> Here is an illustration:
>>
>> [image: Diagram.jpg]
>>
>> The producer produces 2 messages, the messages are load balanced on the 2
>> brokers. Then, Consumer 1 and consumer 2 ask for a receive().
>>
> Does your receive() function issue one credit per call?
>
>>
>> Our observation is that the consumer 1 consumes the first message and the
>> consumer 2 is never getting the second message.
>> We are aware that the first dispatch router will do a prefetch for the 2
>> messages but what is weird is that the prefetched message 2 is never routed
>> to the second dispatch router and consumer 2.
>> I attached the dispatch routers config.
>>
>> This is what is going on in my view -
> 1. The producer sends two messages - Message 1 goes to Broker 1 and
> Message 2 goes to Broker 2.
> 2. Now Consumer 1 attaches to Router 1 and calls receive() issuing one
> credit. (The Consumer 2 has not yet attached to Router 2). The autolinks
> from Broker 1 to Router 1 and Broker 2 to Router 2 each have a prefetch of
> 250 credits each. So,Message 1 and Message 2
> both come down to Router 1. Message 1 is sent to Consumer 1. Message 2 is
> waiting on the outbound queue of Consumer 1 waiting to receive a credit
> from Consumer 1.
> 3. Now Consumer 2 shows up and wants to receive Message 2 but that message
> is already queued up to go to Consumer 1 who is still keeping his
> connection open
> 4. If now Consumer 1 drops out, the Message 2 will be RELEASED back to its
> broker and that message will be sent to Consumer 2.
>
> Have you tried attaching Consumer 1 and Consumer 2 first and then
> subsequently bringing up the Producer 1 and send two messages ? In that
> case, I think, both consumers will receive one message.
>
> Thanks.
>
>> Do you have any idea how we can make consumer 2 receive the message?
>>
>> Thanks for your help,
>> Rabih
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
>> For additional commands, e-mail: users-h...@qpid.apache.org
>
>
router {
mode: interior
id: ${dispatch.router.name}
workerThreads: 4
defaultDistribution: unavailable
}

listener {
name: normal_listener
host: 0.0.0.0
port: 0
role: normal
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
linkCapacity: 1
}

listener {
name: inter_router_listener
host: 0.0.0.0
port: 0
role: inter-router
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
linkCapacity: 1
}

log {
module: DEFAULT
enable: info+
includeSource: false
outputFile: stdout
}

connector {
name: broker1
role: route-container
host: 127.0.0.1
port: ${broker1.amqp.port}
}

connector {
name: broker2
role: route-container
host: 127.0.0.1
port: ${broker2.amqp.port}
}

address {
prefix: myQueue
waypoint: yes
}

autoLink {
name: consume_myQueue_broker1
addr: myQueue
connection: broker1
direction: in
}

address {
prefix: myTopic
waypoint: yes
}

autoLink {
name: produce_myTopic_broker1
addr: myTopic
connection: broker1
direction: out
}

autoLink {
name: consume_myQueue_broker2
addr: myQueue
connection: broker2
direction: in
}

autoLink {
name: produce_myTopic_broker2
addr: myTopic
connection: broker2
direction: out
}
router {
mode: interior
id: ${dispatch.router.name}
workerThreads: 4
defaultDistrib

Re: Uneven distribution of messages

2019-07-11 Thread Rabih M
Re-attached conf files with .txt extension.


On Thu, Jul 11, 2019 at 4:36 PM Rabih M  wrote:

> Hello,
>
> We are using Qpid dispatch router 1.7, Qpid broker 7.1.3 on redhat linux
> rhel 6.4.
>
> Use case description:
> We have a cluster of 2 dispatch routers and 2 brokers. A sharded queue
> "myQueue" on the 2 brokers.
> Here is an illustration:
>
> [image: Diagram.jpg]
>
> The producer produces 2 messages, the messages are load balanced on the 2
> brokers. Then, Consumer 1 and consumer 2 ask for a receive().
>
> Our observation is that the consumer 1 consumes the first message and the
> consumer 2 is never getting the second message.
> We are aware that the first dispatch router will do a prefetch for the 2
> messages but what is weird is that the prefetched message 2 is never routed
> to the second dispatch router and consumer 2.
> I attached the dispatch routers config.
>
> Do you have any idea how we can make consumer 2 receive the message?
>
> Thanks for your help,
> Rabih
>
router {
mode: interior
id: router2
workerThreads: 4
defaultDistribution: unavailable
}

listener {
name: normal_listener
host: 0.0.0.0
port: 5673
role: normal
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
}

listener {
name: inter_router_listener
host: 0.0.0.0
port: 5683
role: inter-router
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
}

log {
module: DEFAULT
enable: info+
includeSource: false
outputFile: stdout
}

connector {
name: dispatch1
role: inter-router
host: 127.0.0.1
port: 5682
}

connector {
name: broker1
role: route-container
host: 127.0.0.1
port: 5670
}

connector {
name: broker2
role: route-container
host: 127.0.0.1
port: 5671
}

address {
prefix: myQueue
waypoint: yes
}

autoLink {
name: consume_myQueue_broker1
addr: myQueue
connection: broker1
direction: in
}

autoLink {
name: produce_myQueue_broker1
addr: myQueue
connection: broker1
direction: out
}

autoLink {
name: consume_myQueue_broker2
addr: myQueue
connection: broker2
direction: in
}

autoLink {
name: produce_myQueue_broker2
addr: myQueue
connection: broker2
direction: out
}

router {
mode: interior
id: router1
workerThreads: 4
defaultDistribution: unavailable
}

listener {
name: normal_listener
host: 0.0.0.0
port: 5672
role: normal
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
}

listener {
name: inter_router_listener
host: 0.0.0.0
port: 5682
role: inter-router
saslMechanisms: ANONYMOUS
requireSsl: no
authenticatePeer: no
}

connector {
name: broker1
role: route-container
host: 127.0.0.1
port: 5670
linkCapacity: 1
}

connector {
name: broker2
role: route-container
host: 127.0.0.1
port: 5671
linkCapacity: 1
}

address {
prefix: myQueue
waypoint: yes
}

autoLink {
name: consume_myQueue_broker1
addr: myQueue
connection: broker1
direction: in
}

autoLink {
name: produce_myQueue_broker1
addr: myQueue
connection: broker1
direction: out
}

autoLink {
name: consume_myQueue_broker2
addr: myQueue
connection: broker2
direction: in
}

autoLink {
name: produce_myQueue_broker2
addr: myQueue
connection: broker2
direction: out
}
-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Uneven distribution of messages

2019-07-11 Thread Rabih M
Hello,

We are using Qpid dispatch router 1.7, Qpid broker 7.1.3 on redhat linux
rhel 6.4.

Use case description:
We have a cluster of 2 dispatch routers and 2 brokers. A sharded queue
"myQueue" on the 2 brokers.
Here is an illustration:

[image: Diagram.jpg]

The producer produces 2 messages, the messages are load balanced on the 2
brokers. Then, Consumer 1 and consumer 2 ask for a receive().

Our observation is that the consumer 1 consumes the first message and the
consumer 2 is never getting the second message.
We are aware that the first dispatch router will do a prefetch for the 2
messages but what is weird is that the prefetched message 2 is never routed
to the second dispatch router and consumer 2.
I attached the dispatch routers config.

Do you have any idea how we can make consumer 2 receive the message?

Thanks for your help,
Rabih

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Re: [Proton-C] Discovery

2019-06-05 Thread Rabih M
Hello Alan,

Will your pull request "reconnect_update" be released in the next proton
release, 0.29.0?
We are waiting for this development in order to implement some features on
our side.

We can help if needed...
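
For what it is worth, setting the reconnect behaviour up front when the
connection is created (which is what Alan recommends below for the initial
options) looks roughly like this with the released API; a sketch, assuming
the reconnect_options setters (max_attempts, failover_urls) available in
recent releases:

#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/reconnect_options.hpp>

// Sketch: configure reconnect/failover once, at connect time, rather than
// updating it later from on_transport_error.
struct handler : proton::messaging_handler {};

int main() {
   handler h;
   proton::container c(h);

   proton::reconnect_options ro;
   ro.max_attempts(10); // cap the number of reconnect attempts
   ro.failover_urls({"amqp://backup1:5672", "amqp://backup2:5672"});

   c.connect("amqp://primary:5672",
             proton::connection_options().reconnect(ro));
   c.run();
}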

Best regards,
Rabih

On Fri, May 3, 2019 at 5:29 PM Alan Conway  wrote:

> On Fri, May 3, 2019 at 4:52 AM HADI Ali  wrote:
>
> > Thanks a lot. This is exactly what we need for our custom discovery
> logic.
> > We are also thinking of updating the maxReconnectAttempts in the
> > messaging_handler::on_connection_open in order to have the equivalent of
> > the startupMaxReconnectAttempts in JMS. Do you think this will be
> feasible
> > with your dev?
> >
>
> I'd recommend setting the initial connection_options in
> container::connect() instead, and using reconnect_update() only for updates
> *during* a reconnect, i.e. in on_transport_error().
> It would probably work if you're careful but I'd be worried about potential
> for confusion with connection options over-writing each other in multiple
> different places.
>
>
>
> >
> > Thanks,
> > Ali
> >
> > From: Alan Conway 
> > Sent: jeudi 2 mai 2019 21:29
> > To: users@qpid.apache.org
> > Subject: Re: [Proton-C] Discovery
> >
> >
> >
> > On Thu, May 2, 2019 at 7:13 AM HADI Ali  > ali.h...@murex.com>> wrote:
> > Hello Alan,
> >
> > I am using Proton 0.27.0 and I am not finding a way to update the
> > reconnect options.
> > How can I use the connection.options().reconnect(reconnect_opts_)) you
> > proposed or something equivalent in order to update the reconnect options
> > after the connection is created ?
> >
> > I apologise - the ability to update the connection options was never
> > merged. I've submitted a pull request for review since I'm not working
> full
> > time on proton at the moment.
> > https://github.com/apache/qpid-proton/pull/181
> > With that change in place you will be able to do what you want, it should
> > be in the next release if there are no objections.  See the attached
> > example. The relevant part of the example is:
> >
> > void on_transport_error(proton::transport & t) OVERRIDE {
> > std::cout << "disconnected by: " << t.error() << std::endl;
> > static int n = 0;
> > // Use the initial failover list the first 10 times, then switch
> > to a new one.
> > if (n++ == 10) {
> > std::cout << "switching failover-list" << std::endl;
> > proton::connection_options co;
> > proton::reconnect_options ro;
> > ro.failover_urls({"badX","badY"});
> > co.reconnect(ro);
> > t.connection().reconnect_update(co); // Apply new options to
> > connection
> > }
> > if (n > 20) { exit(0); } // Give up after 20 reconnects
> > }
> >
> >
> >
> >
> > Regards,
> > Ali
> >
> > -Original Message-
> > From: Alan Conway mailto:acon...@redhat.com>>
> > Sent: mardi 30 avril 2019 21:11
> > To: users@qpid.apache.org<mailto:users@qpid.apache.org>
> > Subject: Re: [Proton-C] Discovery
> >
> > On Tue, Apr 30, 2019 at 8:25 AM HADI Ali  > ali.h...@murex.com>> wrote:
> >
> > > Hello Alan,
> > >
> > > Do you have any updates concerning the proposition to update the
> > > reconnect options in Proton-C?
> > > Is it planned and if yes do you have an idea when?
> > >
> >
> > The changes I describe below were released version 0.26, and are
> available
> > in the current release 0.27. You should be able to take advantage of them
> > now.
> >
> >
> > > Regards,
> > > Ali
> > >
> > > -Original Message-
> > > From: Alan Conway mailto:acon...@redhat.com>>
> > > Sent: jeudi 24 janvier 2019 23:24
> > > To: users@qpid.apache.org<mailto:users@qpid.apache.org>
> > > Subject: Re: [Proton-C] Discovery
> > >
> > > On Thu, Jan 24, 2019 at 8:28 AM Rabih M   > rabih.prom...@gmail.com>> wrote:
> > >
> > > > Hello,
> > > >
> > > > Knowing that the on_transport_error will be called only when the
> > > > max_reconnect is reached, the user will have to manage one reattempt
> > > > at a time. It will become too flexible,  the user will have to write
> > > > his own reconnect strategy with out reusing what was done already
> > > >

Re: [Proton Cpp] Random error on connection open on windows

2019-05-20 Thread Rabih M
Hello,

Any ideas about the mail below?
Should I create a Jira issue?

Best Regards,
Rabih

On Mon, May 6, 2019 at 5:33 PM Rabih M  wrote:

> Hello,
>
> We are using proton cpp 0.27.1 with visual studio 2013.
>
> We have the attached Main.cpp that is hanging on line 58 on windows
> randomly.
> As if line 57 was not executed.
>
> For the working case we have the following logs :
> broker on_container_start
> client on_container_start
> [0260A1C0]:  -> AMQP
> [0260A1C0]:0 -> @open(16)
> [container-id="c77d9e59-2061-4804-850c-0d039a0df414", hostname="127.0.0.1",
> channel-max=32767]
> [0261FB50]:  <- AMQP
> [0261FB50]:0 <- @open(16)
> [container-id="c77d9e59-2061-4804-850c-0d039a0df414", hostname="127.0.0.1",
> channel-max=32767]
> [0261FB50]:  -> AMQP
> b[0260A1C0]:  <- AMQP
> roker on_connection_open
> [0261FB50]:0 -> @open(16)
> [container-id="292348be-1884-47e1-8a6c-3dd62cae7252", channel-max=32767]
> [0260A1C0]:0 <- @open(16)
> [container-id="292348be-1884-47e1-8a6c-3dd62cae7252", channel-max=32767]
> client on_connection_open
> [0260A1C0]:0 -> @close(24) [error=@error(29)
> [condition=:"amqp:connection:framing-error", description="connection
> aborted"]]
> [0260A1C0]:  <- EOS
> [0261FB50]:0 -> @close(24) [error=@error(29)
> [condition=:"amqp:connection:framing-error", description="connection
> aborted"]]
> [0261FB50]:  <- EOS
> [0261FB50]:  -> EOS
> broker on_transport_error
> Broker::on_transport_error: amqp:connection:framing-error: connection
> aborted
> client on_container_stop
>
> For the error case:
> broker on_container_start
> client on_container_start
>
> Is it a bug? On Linux we do not have this problem.
>
> Best regards,
> Rabih & Jeremy
>


[Proton Cpp] Auto acknowledge for delivery is not guaranteed

2019-05-06 Thread Rabih M
Hello,

The flag proton::receiver_options().auto_accept(true), if set, does not
guarantee that the user can't take any action on the delivery.
There is a race condition on who sends the request first, the user or
proton.

The following example fails randomly (The example is taken from the
imperative POC):
   Broker brk("//127.0.0.1:5672", "examples");
   std::vector<proton::message> msgs{ proton::message("msg1") };
   brk.injectMessages(msgs);
   {
      proton::Container cont;
      proton::Connection conn = cont.openConnection("//127.0.0.1:5672",
                                                    proton::connection_options());
      proton::Session sess = conn.openSession(proton::session_options());
      proton::Receiver rec = sess.openReceiver("examples",
                                               proton::receiver_options().auto_accept(true));

      proton::Delivery del1 = rec.receive().get(); // should auto acknowledge
      del1.reject();
   }

   ASSERT_EQ(1, brk.m_acceptedMsgs); // is 0 if reject is done first
   ASSERT_EQ(0, brk.m_rejectedMsgs); // is 1 if reject is done first

This can easily be handled at the proton cpp level: the user action can be a
no-op. WDYT?
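For illustration, a standalone sketch (plain C++, not proton code; all names
are made up) of the no-op idea only, leaving the threading aside: once the
delivery has been auto-accepted, a later user action is simply ignored
instead of racing with proton:

#include <iostream>

// Hypothetical wrapper state, not a proton type.
struct DeliveryGuard {
   bool settled = false;

   void autoAccept() { settled = true; }   // what proton's auto_accept would do

   void reject() {                         // what the user calls
      if (settled) {
         std::cout << "reject ignored: already auto-accepted" << std::endl;
         return;                           // no-op instead of a second disposition
      }
      settled = true;
      std::cout << "rejected" << std::endl;
   }
};

int main() {
   DeliveryGuard d;
   d.autoAccept(); // proton settles first
   d.reject();     // the user action becomes a no-op
   return 0;
}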
Used proton version: 0.27.1

Best regards,
Rabih


[Proton Cpp] Random error on connection open on windows

2019-05-06 Thread Rabih M
Hello,

We are using proton cpp 0.27.1 with visual studio 2013.

We have the attached Main.cpp that randomly hangs on line 58 on Windows,
as if line 57 was not executed.

For the working case we have the following logs :
broker on_container_start
client on_container_start
[0260A1C0]:  -> AMQP
[0260A1C0]:0 -> @open(16)
[container-id="c77d9e59-2061-4804-850c-0d039a0df414", hostname="127.0.0.1",
channel-max=32767]
[0261FB50]:  <- AMQP
[0261FB50]:0 <- @open(16)
[container-id="c77d9e59-2061-4804-850c-0d039a0df414", hostname="127.0.0.1",
channel-max=32767]
[0261FB50]:  -> AMQP
b[0260A1C0]:  <- AMQP
roker on_connection_open
[0261FB50]:0 -> @open(16)
[container-id="292348be-1884-47e1-8a6c-3dd62cae7252", channel-max=32767]
[0260A1C0]:0 <- @open(16)
[container-id="292348be-1884-47e1-8a6c-3dd62cae7252", channel-max=32767]
client on_connection_open
[0260A1C0]:0 -> @close(24) [error=@error(29)
[condition=:"amqp:connection:framing-error", description="connection
aborted"]]
[0260A1C0]:  <- EOS
[0261FB50]:0 -> @close(24) [error=@error(29)
[condition=:"amqp:connection:framing-error", description="connection
aborted"]]
[0261FB50]:  <- EOS
[0261FB50]:  -> EOS
broker on_transport_error
Broker::on_transport_error: amqp:connection:framing-error: connection
aborted
client on_container_stop

For the error case:
broker on_container_start
client on_container_start

Is it a bug? On Linux we do not have this problem.

Best regards,
Rabih & Jeremy
#include <future>
#include <iostream>
#include <stdexcept>
#include <string>

#include "Broker.hpp"

#include <proton/connection.hpp>
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>

struct ContainerHandler : public proton::messaging_handler
{
   void on_container_start(proton::container& c) override
   {
  std::cout << "client on_container_start" << std::endl;
  containerStarted.set_value();
   }

   void on_container_stop(proton::container&) override
   {
  std::cout << "client on_container_stop" << std::endl;
   }

   std::promise<void> containerStarted;
};

struct ConnectionHandler : public proton::messaging_handler
{
   void on_connection_open(proton::connection&) override
   {
  std::cout << "client on_connection_open" << std::endl;
  connectionStarted.set_value();
   }

   std::promise<void> connectionStarted;
};

int main()
{
   try {
  Broker brk("//127.0.0.1:5672", "examples");

  ContainerHandler containerHandler;
  proton::container cont(containerHandler);
  auto containerStopped = std::async(std::launch::async, [&]() {
 try {
cont.run();
 }
 catch (const std::exception& e) {
std::cout << "std::exception caught, message:" << e.what() << 
std::endl;
 }});
  containerHandler.containerStarted.get_future().get();

  ConnectionHandler connectionHandler;
  cont.connect("//127.0.0.1:5672", 
proton::connection_options().handler(connectionHandler));
  connectionHandler.connectionStarted.get_future().get();

  cont.stop();
  containerStopped.get();
   }
   catch (std::exception& e) {
  std::cerr << "Expected exception: " << e.what() << std::endl;
   }
   return 0;
}

#ifndef TEST_BROKER_HPP
#define TEST_BROKER_HPP

#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/listen_handler.hpp>
#include <proton/listener.hpp>
#include <proton/message.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/receiver_options.hpp>

class listenerHandler : public proton::listen_handler
{
public:
   std::future<void> getStartedFuture()
   {
  return m_containerStarted.get_future();
   }

private:
   void on_open(proton::listener&) override
   {
  m_containerStarted.set_value();
   }

   void on_error(proton::listener&, const std::string& s) override
   {
  
m_containerStarted.set_exception(std::make_exception_ptr(std::runtime_error(s)));
   }

   std::promise<void> m_containerStarted;
};

class Broker : public proton::messaging_handler
{
public:
   Broker(const std::string& url, const std::string& destination)
  :m_url(url + "/" + destination)
   {
  m_brokerThread = std::thread([&]() {
 try
 {
proton::container(*this).run();
 }
 catch (const std::exception& e) {
std::cerr << "Broker threw exception: " << e.what() << std::endl;
 }});

  // Wait for the container to start
  m_listenerHandler.getStartedFuture().get();
   }

   ~Broker()
   {
  if (!m_isClosed) {
 m_listener.stop();
  }
   }

   void injectMessages(std::vector<proton::message> messages)
   {
  m_messages.insert(m_messages.end(), messages.begin(), messages.end());
   }

   int m_acceptedMsgs = 0;
   int m_rejectedMsgs = 0;
   int m_releasedMsgs = 0;

private:
   void on_container_start(proton::container& c) override
   {
  std::cout << "broker on_container_start" << std::endl;
  c.receiver_options(proton::receiver_options());
  m_listener = c.listen(m_url, m_listenerHandler);
   }

   void on_connection_open(proton::connection& c) override
   {
  std::cout << "broker on_connection_open" << std::endl;
  m_connection = c;
  c.open();
   }


Re: Managing the lifetime of a proton::delivery object

2019-04-30 Thread Rabih M
Hello Cliff,

After spending more time debugging, I think we found where the real problem
comes from:
I attached a simplified use case in the Main.cpp, Broker.hpp.
When we call delivery.work_queue().add() on line 35 of Main.cpp, we get a
random segfault, and if we replace it with
connection.work_queue().add() everything comes back to normal.

I tried to think about what might be the source of the problem but I can't
find any ideas yet...

Best regards,
Rabih



On Fri, Apr 26, 2019 at 9:38 AM Cliff Jansen  wrote:

> Hi Jeremy,
>
> I haven't had a chance to parse your overall description to arrive at
> a reproducer of your segfault problem.
>
> However, the DeliverySettleTest.cpp code you provided runs correctly.
> The client view of the delivery is settled via
>
>   d.accept();  // line 34
>
> which results in the disposition frame including "settled=true" sent
> to the server (class Broker).  This also means that the client locally
> has determined that (in AMQP terminology) it is now "safe to forget"
> the disposition.  No changes to the disposition are legal after that,
> and no callbacks about the disposition would be expected either.
>
> On the server side, in both on_tracker_accept() and
> on_tracker_settle() you would find that t.settled() is true, as you
> would expect.  The call to
>
>   t.settle();  // line 147 Broker.hpp
>
> is important to release memory since auto_settle has been turned off.
> But proton knows that the peer has settled and forgotten the delivery.
> So it sensibly declines to send a disposition frame (about a forgotten
> entity) that would be ignored by the client as per the AMQP spec.
>
> Note that the above explanation is C++ specific.  If you use lower
> level C primitives, it is possible to accept a delivery without
> settling it until later and get closer to what you appear to be
> striving for in this test.
>
> However, I can't help thinking that relying on two wire transfers,
> which could be arbitrarily slow, is the mechanism you want to pursue
> to avoid the (lifecycle?) segfault you are trying to address.
>
> Perhaps you could post the actual code with the segfault problem for
> me to look at.
>
> On Thu, Apr 25, 2019 at 6:42 AM jeremy  wrote:
>
> > Hello,
> >
> > In the client API we're writing, we let a consumer consume messages
> > asynchronously, receiving a future of a delivery. In the receiver
> handler,
> > we keep track of deliveries in a list, as such:
> >
> >
> https://github.com/rabih-mourad/qpid-imperative-proton/blob/master/include/proton/imperative/Receiver.hpp#L59
> > The user can then get the delivery and accept/reject/release it:
> >
> >
> https://github.com/rabih-mourad/qpid-imperative-proton/blob/master/src/Delivery.cpp#L7
> >
> > The proton delivery operations (accept/reject/release) are asynchronous
> > (added on the work queue:
> >
> >
> https://github.com/apache/qpid-proton/blob/7c0a3387a5096d86541dbddfeb55f36eb0b85dd8/c/src/core/engine.c#L732
> > )
> > unless I'm mistaken. In the above delivery code, removing the delivery
> > object from the list as soon as we go out of scope
> > (
> >
> https://github.com/rabih-mourad/qpid-imperative-proton/blob/master/src/Delivery.cpp#L11
> > )
> > results in a random segfault, since at the time the accept is called on
> the
> > delivery in the work queue, the delivery would have been already removed
> > from the list.
> > As a solution, we thought of implementing on_delivery_settle on the
> > receiver's side, and removing the delivery there. However, we noticed
> that
> > on_delivery_settle is never called (delivery mode set to none on both
> > sender
> > and receiver sides, sender auto_settle set to false, receiver auto ack
> set
> > to false). I tested with all delivery modes on both sides, and the
> > on_delivery_settle was never called. I attached the corresponding code.
> >
> > Got the following log:
> >
> > broker on_container_start
> > [0x1021a70]:  -> AMQP
> > [0x1021a70]:0 -> @open(16)
> > [container-id="b7cdef05-f195-4760-b262-655d538f0419",
> hostname="127.0.0.1",
> > channel-max=32767]
> > [0x7f719c0032d0]:  <- AMQP
> > [0x7f719c0032d0]:0 <- @open(16)
> > [container-id="b7cdef05-f195-4760-b262-655d538f0419",
> hostname="127.0.0.1",
> > channel-max=32767]
> > broker on_connection_open
> > [0x7f719c0032d0]:  -> AMQP
> > [0x7f719c0032d0]:0 -> @open(16)
> > [container-id="64a68948-0d32-4c19-89c7-9d62c408e248", channel-max=32767]
> > [0x1021a70]:  <- AMQP
> > [0x1021a70]:0 <- @open(16)
> > [container-id="64a68948-0d32-4c19-89c7-9d62c408e248", channel-max=32767]
> > [0x1021a70]:0 -> @begin(17) [next-outgoing-id=0,
> > incoming-window=2147483647,
> > outgoing-window=2147483647]
> > [0x1021a70]:0 -> @attach(18)
> [name="aacbdc5d-0b8e-474c-a9e7-bf0c5a1bbd25",
> > handle=0, role=true, snd-settle-mode=2, rcv-settle-mode=0,
> > source=@source(40) [address="examples", durable=0, timeout=0,
> > dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false],
> > initial-delivery-count=0, 

Re: Ref counting in proton C

2019-04-24 Thread Rabih M
Hello,

We managed to reproduce the bug in a simplified use case.
Please find attached the "Broker.hpp" along with the test "Main.cpp".
The usage was inspired from the proton C++ multi-threaded examples and the
doc:
https://github.com/apache/qpid-proton/blob/master/cpp/docs/mt.md#thread-safety-rules
.
Each of the container, connection, and session has its own handler.
The connection and session handlers are destroyed when they go out of scope.
The handlers hold copies of the proton objects as members.

If we run the test without adding line 142 (which releases the proton
session), we have a race condition that leads to a segfault.

From the cores, it is always segfaulting on the ref counting.
This is why we proposed the atomic types in C11.

Best regards.
Jeremy and Rabih


On Tue, Apr 23, 2019 at 9:28 AM Cliff Jansen  wrote:

> I am of the same opinion as Gordon that fixing the counting (to be atomic)
> will only address the current symptom without preventing other races that
> will present themselves later or on other hardware architectures.
>
> Having two threads participating in Proton object reference counting is
> problematic as you can't guarantee which ones will end up running the
> finalizers (or resurectors if the reference count increases during
> finalization).
>
> The simplest way to ensure thread safety is to abstain from all direct
> (except for the handful of documented thread-safe interfaces) and indirect
> calls into the Proton engine from any non-callback thread.  In C++ the
> latter is more difficult, as there can be hidden reference counting if you
> have Proton objects in your application objects.  There you have to be certain
> that you never have such objects being copied or destroyed in a
> non-callback thread.  Alternatively, you can use dumb pointers to the
> Proton objects and use some application mechanism (mutex, condition
> variable...) to allow the non-callback thread to know when the pointer to
> the Proton object is valid.
>
> Cliff
>
> On Mon, Apr 22, 2019 at 1:49 AM Gordon Sim  wrote:
>
> > On 19/04/2019 11:02 am, Rabih M wrote:
> > > We are only using proton objects in the handler, however, at
> destruction
> > > time, there is a race condition on the reference counter between the
> > proton
> > > objects from the main thread that are being destroyed, and the proton
> > > objects held by the proton thread.
> >
> > Can you elaborate a little on this? Access to (or destruction of)
> > objects from distinct connection should not need to be coordinated,
> > though objects for the same connection clearly have to be.
> >
> > Is the issue you are seeing related to some global state (e.g. sasl
> > destruction)?
> >
> > What calls are being made that are in a race? Is this plain C or c++
> > with implicit calls being made by destructors?
> >
> > > The reference counter not being atomic is very error prone for users.
> > > How about transforming the int to an atomic int (already provided in
> C11
> > > standard: https://en.cppreference.com/w/c/language/atomic)?
> >
> > My initial thought is that just making the counter atomic would not be
> > sufficient for general races, so it would be important to understand
> > what specific uses cases this was designed to protect.
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: users-h...@qpid.apache.org
> >
> >
>
#include <functional>
#include <future>
#include <iostream>
#include <stdexcept>
#include <string>
#include <thread>
#include <vector>

#include "Broker.hpp"

#include <proton/connection.hpp>
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/session.hpp>


struct Container : public proton::messaging_handler {
   Container(std::function<void()> onContainerStarted, std::function<void()> onError) :
      m_onContainerStarted(onContainerStarted),
      m_onError(onError) {
   }

   void on_container_start(proton::container&) override {
  m_onContainerStarted();
   }

   void on_error(const proton::error_condition&) override {
  m_onError();
   }

   std::function<void()> m_onContainerStarted;
   std::function<void()> m_onError;
};

struct Connection : public proton::messaging_handler {
   Connection(std::function<void(proton::connection&)> onConnectionOpened,
              std::function<void()> onConnectionClosed,
              std::function<void()> onError) :
      m_onConnectionOpened(onConnectionOpened),
      m_onConnectionClosed(onConnectionClosed),
      m_onError(onError) {
   }

   void on_connection_open(proton::connection& connection) override {
  m_connection = connection;
  m_onConnectionOpened(m_connection);
   }

   void on_connection_close(proton::connection&) override {
  m_onConnectionClosed();
   }

   void on_connection_error(proton::connection&) over

Ref counting in proton C

2019-04-19 Thread Rabih M
Hello,

We are using proton 0.27.0.
We are facing segfaults due to race conditions on the ref counting
mechanism of proton C at destruction time.

We are only using proton objects in the handler, however, at destruction
time, there is a race condition on the reference counter between the proton
objects from the main thread that are being destroyed, and the proton
objects held by the proton thread.

The reference counter not being atomic is very error prone for users.
How about transforming the int to an atomic int (already provided in C11
standard: https://en.cppreference.com/w/c/language/atomic)?
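For illustration, a minimal standalone sketch (C++ std::atomic here rather
than C11 atomics, and not proton code) of the property an atomic counter
gives: balanced increments and decrements from two threads stay consistent,
which the current plain int cannot guarantee:

#include <atomic>
#include <cassert>
#include <thread>

int main() {
   std::atomic<int> refcount{1};

   // Two threads doing balanced incref/decref, as the main thread and the
   // proton thread do during destruction.
   auto bump = [&refcount]() {
      for (int i = 0; i < 100000; ++i) { ++refcount; --refcount; }
   };

   std::thread t1(bump);
   std::thread t2(bump);
   t1.join();
   t2.join();

   assert(refcount.load() == 1); // holds with std::atomic, can fail with a plain int
   return 0;
}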

We can provide the patch if you are OK with it.

Best regards,
Rabih and Jeremy (Murex)


Re: proton cpp container.stop not fully thread safe

2019-04-03 Thread Rabih M
Hello,

Concerning this point, I still have the same crash even after modifying the
code to be thread safe.
I attached the corrected code.

If I add a sleep of 50 ms before stopping the container, everything works
correctly.

I think there is a concurrency problem somewhere in the proton code. Do you
confirm?

Best regards,
Rabih

On Wed, Mar 27, 2019 at 5:31 PM Rabih M  wrote:

> Hello,
>
> I am using proton cpp version 0.27.0.
> I have a random crash when calling stop on the container after closing the
> connection.
> Please find attached the code and the stack.
>
> In the doc, it is said that the container object is thread safe, but
> apparently there is a flaw...
>
> Can you please advise?
>
> Do you need me to create a Jira issue?
>
>
> Best regards,
>
> Rabih
>
#include <future>
#include <iostream>
#include <string>
#include <thread>

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/transport.hpp>
#include <proton/work_queue.hpp>

class handler : public proton::messaging_handler {

public:
   handler(const std::string& u) : m_url(u) {}

   proton::connection m_con;
   std::promise<void> m_startProm;
   std::promise<void> m_closeProm;

private:
   void on_connection_open(proton::connection& c)override{
   m_con = c;
   m_startProm.set_value();
   }
   void on_transport_close(proton::transport&)override {
  m_closeProm.set_value();
   }

   std::string m_url;
};


int main() {
   try {
  std::string url("//127.0.0.1:5672");
  handler h(url);
  proton::container c(h);
  c.auto_stop(false);
  std::thread clientThread([&]() {
 try
 {
c.run();
 }
 catch (const std::exception& e) {
std::cerr << "Broker threw exception: " << e.what() << std::endl;
 }});

  c.connect(url);
  h.m_startProm.get_future().get();
  h.m_con.work_queue().add([&]() { h.m_con.close(); });
  h.m_closeProm.get_future().get();
  c.stop();
  if (clientThread.joinable()) {
 clientThread.join();
  }
   }
   catch (std::exception& e) {
  std::cerr << e.what() << std::endl;
   }
   return 0;
}

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Re: Proton cpp crashes on pn_class_decref

2019-04-03 Thread Rabih M
Hello,

I adapted the code like we said in the last meeting and it worked without
crashes.
So the rule that I deduce is that "we cannot copy any proton object outside
of the handler's class".

I attached the corrected code for future reference.

Thanks,
Rabih

On Wed, Mar 27, 2019 at 5:19 PM Rabih M  wrote:

> Hello,
>
>
>
> I am using proton cpp version 0.27.0.
>
> We found a random failure in our code in proton sometime ago.
>
> I was able to reproduce using simple code against a java broker. Please
> find attached the code and the stack.
>
> The frequency is lower with the simplified code but we are still
> encountering the crash before reaching 60 000 repetitions.
>
>
>
> Murex will be using Proton and we need to ensure we have no such failure
> before using the code in production.
>
>
>
> Can you please advise?
>
> Do you need me to create a Jira issue?
>
>
>
> Best regards,
>
> Rabih
>
#include <future>
#include <iostream>
#include <string>
#include <thread>

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/work_queue.hpp>

class handler : public proton::messaging_handler {

public:
   handler(const std::string& u) : m_url(u) {}

   proton::connection m_con;
   std::promise<void> m_startProm;
   std::promise<void> m_closeProm;

private:
   void on_connection_open(proton::connection& c)override{
   m_con = c;
   m_startProm.set_value();
   }
   void on_connection_close(proton::connection&)override{
  m_closeProm.set_value();
   }

   std::string m_url;
};


int main() {
   try {
  std::string url("//127.0.0.1:5672");
  handler h(url);
  proton::container c(h);
  std::thread clientThread([&]() {
 try
 {
c.run();
 }
 catch (const std::exception& e) {
std::cerr << "Broker threw exception: " << e.what() << std::endl;
 }});

  c.connect(url);
  h.m_startProm.get_future().get();
  h.m_con.work_queue().add([&]() { h.m_con.close(); });
  h.m_closeProm.get_future().get();
  if (clientThread.joinable()) {
 clientThread.join();
  }
   }
   catch (std::exception& e) {
  std::cerr << e.what() << std::endl;
   }
   return 0;
}

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

proton cpp container.stop not fully thread safe

2019-03-27 Thread Rabih M
Hello,

I am using proton cpp version 0.27.0.
I have a random crash when calling stop on the container after closing the
connection.
Please find attached the code and the stack.

In the doc, it is said that the container object is thread safe, but
apparently there is a flaw...

Can you please advise?

Do you need me to create a Jira issue?


Best regards,

Rabih
#include <future>
#include <iostream>
#include <string>
#include <thread>

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/work_queue.hpp>

class handler : public proton::messaging_handler {
public:
   handler(const std::string& u) 
   : m_url(u), m_work(0) 
   {}

   proton::work_queue* getWorkQueue() {
  if (!m_work)
 m_workProm.get_future().get();

  return m_work;
   }

   std::promise<void> m_closeProm;

private:
   void on_connection_open(proton::connection& c) override {
      m_work = &c.work_queue();
      m_workProm.set_value();
   }

   void on_connection_close(proton::connection&)override{
  m_closeProm.set_value();
   }

   std::string m_url;
   proton::work_queue* m_work;
   std::promise<void> m_workProm;
};

int main() {
   try {
  std::string url("//127.0.0.1:5672");
  handler h(url);
  proton::container c(h);
  std::thread clientThread([&]() {
 try
 {
c.run();
 }
 catch (const std::exception& e) {
std::cerr << "Broker threw exception: " << e.what() << std::endl;
 }});

  proton::connection con = c.connect(url);
  h.getWorkQueue()->add([&]() {con.close(); });
  h.m_closeProm.get_future().get();
  c.stop();

  if (clientThread.joinable())
  {
 clientThread.join();
  }
   }
   catch (std::exception& e) {
  std::cerr << e.what() << std::endl;
   }
   return 0;
}
#0  pn_transport_tail_closed (transport=0x0) at 
/proton-workspace/qpid-proton-0.27.0/c/src/core/transport.c:3102
#1  0x7ff91c4450e9 in pconnection_rclosed (pc=0x183d790, events=Unhandled 
dwarf expression opcode 0xf3) at 
/proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:962
#2  pconnection_process (pc=0x183d790, events=Unhandled dwarf expression opcode 
0xf3) at /proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:1164
#3  0x7ff91c445cd2 in process_inbound_wake (p=0x1831ce0, can_block=true) at 
/proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:2092
#4  proactor_do_epoll (p=0x1831ce0, can_block=true) at 
/proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:2129
#5  0x7ff91d1acbec in proton::container::impl::thread (this=0x1830520) at 
/proton-workspace/qpid-proton-0.27.0/cpp/src/proactor_container_impl.cpp:736
#6  0x7ff91d1ad220 in proton::container::impl::run (this=0x1830520, 
threads=) at 
/proton-workspace/qpid-proton-0.27.0/cpp/src/proactor_container_impl.cpp:788
#7  0x0048114a in 
ContainerTest_rabih_Test::TestBody()::{lambda()#1}::operator()() const ()
#8  0x00482e42 in void 
std::_Bind_simple::_M_invoke<>(std::_Index_tuple<>) ()
#9  0x00482d87 in 
std::_Bind_simple::operator()() ()
#10 0x00482d04 in 
std::thread::_Impl >::_M_run() ()
#11 0x7ff91cf2d470 in ?? () from /usr/lib64/libstdc++.so.6
#12 0x7ff91d641851 in start_thread () from /lib64/libpthread.so.0
#13 0x7ff91c73290d in clone () from /lib64/libc.so.6

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Proton cpp crashes on pn_class_decref

2019-03-27 Thread Rabih M
Hello,



I am using proton cpp version 0.27.0.

We found a random failure in our code in proton some time ago.

I was able to reproduce using simple code against a java broker. Please
find attached the code and the stack.

The frequency is lower with the simplified code but we are still
encountering the crash before reaching 60 000 repetitions.



Murex will be using Proton and we need to ensure we have no such failure
before using the code in production.



Can you please advise?

Do you need me to create a Jira issue?



Best regards,

Rabih
#0  0x in ?? ()
#1  0x7fbb25e2a200 in pn_class_decref (clazz=0x257a100, object=0x25888f0) 
at /proton-workspace/qpid-proton-0.27.0/c/src/core/object/object.c:91
#2  0x7fbb25e333a6 in pn_connection_driver_destroy (d=0x258a628) at 
/proton-workspace/qpid-proton-0.27.0/c/src/core/connection_driver.c:93
#3  0x7fbb26056704 in pconnection_final_free (pc=0x258a080) at 
/proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:889
#4  0x7fbb2605a03c in pconnection_done (p=Unhandled dwarf expression opcode 
0xf3) at /proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:1044
#5  pn_proactor_done (p=Unhandled dwarf expression opcode 0xf3) at 
/proton-workspace/qpid-proton-0.27.0/c/src/proactor/epoll.c:2166
#6  0x7fbb26dbfc42 in proton::container::impl::thread (this=0x2588100) at 
/proton-workspace/qpid-proton-0.27.0/cpp/src/proactor_container_impl.cpp:751
#7  0x7fbb26dc0220 in proton::container::impl::run (this=0x2588100, 
threads=) at 
/proton-workspace/qpid-proton-0.27.0/cpp/src/proactor_container_impl.cpp:788
#8  0x004810ea in 
ContainerTest_rabih_Test::TestBody()::{lambda()#1}::operator()() const ()
#9  0x00482e1c in void 
std::_Bind_simple::_M_invoke<>(std::_Index_tuple<>) ()
#10 0x00482d61 in 
std::_Bind_simple::operator()() ()
#11 0x00482cde in 
std::thread::_Impl >::_M_run() ()
#12 0x7fbb26b40470 in ?? () from /usr/lib64/libstdc++.so.6
#13 0x7fbb27254851 in start_thread () from /lib64/libpthread.so.0
#13 0x7fbb2634590d in clone () from /lib64/libc.so.6

#include <future>
#include <iostream>
#include <string>
#include <thread>

#include <proton/connection.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/work_queue.hpp>

class handler : public proton::messaging_handler {
public:
   handler(const std::string& u) 
   : m_url(u), m_work(0) 
   {}

   proton::work_queue* getWorkQueue() {
  if (!m_work)
 m_workProm.get_future().get();

  return m_work;
   }

   std::promise<void> m_closeProm;

private:
   void on_connection_open(proton::connection& c) override {
      m_work = &c.work_queue();
      m_workProm.set_value();
   }

   void on_connection_close(proton::connection&)override{
  m_closeProm.set_value();
   }

   std::string m_url;
   proton::work_queue* m_work;
   std::promise<void> m_workProm;
};

int main() {
   try {
  std::string url("//127.0.0.1:5672");
  handler h(url);
  proton::container c(h);
  std::thread clientThread([&]() {
 try
 {
c.run();
 }
 catch (const std::exception& e) {
std::cerr << "Broker threw exception: " << e.what() << std::endl;
 }});

  proton::connection con = c.connect(url);
  h.getWorkQueue()->add([&]() {con.close(); });
  h.m_closeProm.get_future().get();
  con = proton::connection();
  if (clientThread.joinable())
  {
 clientThread.join();
  }
   }
   catch (std::exception& e) {
  std::cerr << e.what() << std::endl;
   }
   return 0;
}

-
To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
For additional commands, e-mail: users-h...@qpid.apache.org

Re: [Proton-C] Discovery

2019-01-24 Thread Rabih M
Hello,

Knowing that on_transport_error will be called only when
max_reconnect is reached, the user will have to manage one reattempt at a
time. It becomes too flexible: the user will have to write his own
reconnect strategy without reusing the embedded reconnect code that already
exists.

We would like to reuse the native reconnect mechanism that is implemented in
proton and be flexible with the URLs, like Qpid JMS and Qpid Python.

Best regards,
Rabih


On Fri, Jan 18, 2019 at 4:58 PM Alan Conway  wrote:

> On Fri, Jan 18, 2019 at 10:35 AM Alan Conway  wrote:
>
> >
> >
> > On Thu, Jan 17, 2019 at 6:56 AM Rabih M  wrote:
> >
> >> Hello,
> >>
> >> What Olivier was proposing is more at the level of the C++ proton
> binding.
> >> What we would like to do is:
> >> Instead of taking a vector of fixed fail-over urls in the
> >> reconnect_options, we would like the reconnect_options to take an
> >> std::function that returns a URL. This function will be called by proton
> >> to
> >> get the next URL when there is a failure. This will allow the clients to
> >> write their own logic to fetch the new URLs dynamically...
> >> On the qpid-jms side we have already this possibility.
> >>
> >
> > That sounds reasonable but I'd suggest an alternative that is a bit more
> > flexible, add this to proton::connection:
> >
> > // Over-ride connection options to be used the next time the connection
> > re-connects.
> > // Takes effect only if/when the connection does re-connect.
> > // Typically called in on_transport_error() to influence automatic
> > re-connect.
> > connection::reconnect_options(connection::options& overrides)
> >
> >
> BROKEN sorry - that would *replace* all your connection options, not
> override the ones you want which is not what I meant.
> This is better:
>
> // Allow updates to the connection_options used by this connection.
> // These updates only take effect if/when the connection is re-connected.
> // Typically used in on_transport_error() to change the options used for
> automatic re-connect.
> connection_options& connection::options();
>
> So now your case becomes:
>
> myhandler {
>     reconnect_options reconnect_opts_; // Save the initial reconnect opts
>
>     void on_transport_error(transport& t) {
>         reconnect_opts_.failover_urls(pick_my_urls()); // Update the URLs
>         t.connection().options().reconnect(reconnect_opts_); // Update the connection's options
>     }
> }
>
> >
> >
> >> We would like to know if it sounds reasonable to you before proposing a
> >> patch. WDYT?
> >>
> >> Best regards,
> >> Rabih
> >>
> >> On Thu, Jan 3, 2019 at 9:15 PM Alan Conway  wrote:
> >>
> >> > On Thu, Jan 3, 2019 at 7:12 AM Gordon Sim  wrote:
> >> >
> >> > > Are you talking specifically about something at the c level rather
> >> than
> >> > > e.g. c++?
> >> > >
> >> > > As far as I recall, the c layer has no built in support for
> >> > > reconnection, that is added by the c++ (or other) wrappers.
> >> > >
> >> > > In the c++ api, perhaps the reconnect options in use could be
> exposed
> >> > > (such that they can then be altered), or else there could be a way
> to
> >> > > provide a function that returns the next url to use rather than a
> >> static
> >> > > list (this is sort of what the python wrapper allows). That may be
> >> what
> >> > > you mean by the onReconnect callback? If so, it sounds reasonable to
> >> me,
> >> > > though it would be better to get the thoughts of those more involved
> >> > > with that component. (Alan, Cliff, Andrew?)
> >> > >
> >> > >
> >> > Just to add some detail to what Gordon said - in C there is no
> reconnect
> >> > support out-of-the-box  but you have the tools to implement any
> strategy
> >> > you like. Use the PN_TRANSPORT_CLOSED event (with pn_transport_error()
> >> set)
> >> > to react to an unexpected disconnect. You can modify the parameters
> used
> >> > for re-connect any way you like. If you re-use the existing
> >> pn_connection_t
> >> > your sessions and links will be automatically re-opened. If you don't
> >> want
> >> > that, you can  throw away the old pn_connection_t and re-connect with
> a
> >> new
> >> > one.
> >> >
> >> 

Re: [Proton-C] Discovery

2019-01-17 Thread Rabih M
Hello,

What Olivier was proposing is more at the level of the C++ proton binding.
What we would like to do is:
Instead of taking a vector of fixed fail-over urls in the
reconnect_options, we would like the reconnect_options to take an
std::function that returns a URL. This function will be called by proton to
get the next URL when there is a failure. This will allow the clients to
write their own logic to fetch the new URLs dynamically...
On the qpid-jms side we already have this possibility.

We would like to know if it sounds reasonable to you before proposing a
patch. WDYT?
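For illustration only, the shape we have in mind is roughly the following
(a hypothetical sketch, not an existing proton API; next_url and
lookupNextDispatchRouter are made-up names):

   proton::reconnect_options ro;
   // Hypothetical: instead of a fixed failover_urls list, pass a callable
   // that proton would invoke before each reconnect attempt.
   ro.next_url([]() -> std::string {
      return lookupNextDispatchRouter(); // user-defined discovery logic
   });
   proton::connection_options co;
   co.reconnect(ro);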

Best regards,
Rabih

On Thu, Jan 3, 2019 at 9:15 PM Alan Conway  wrote:

> On Thu, Jan 3, 2019 at 7:12 AM Gordon Sim  wrote:
>
> > Are you talking specifically about something at the c level rather than
> > e.g. c++?
> >
> > As far as I recall, the c layer has no built in support for
> > reconnection, that is added by the c++ (or other) wrappers.
> >
> > In the c++ api, perhaps the reconnect options in use could be exposed
> > (such that they can then be altered), or else there could be a way to
> > provide a function that returns the next url to use rather than a static
> > list (this is sort of what the python wrapper allows). That may be what
> > you mean by the onReconnect callback? If so, it sounds reasonable to me,
> > though it would be better to get the thoughts of those more involved
> > with that component. (Alan, Cliff, Andrew?)
> >
> >
> Just to add some detail to what Gordon said - in C there is no reconnect
> support out-of-the-box  but you have the tools to implement any strategy
> you like. Use the PN_TRANSPORT_CLOSED event (with pn_transport_error() set)
> to react to an unexpected disconnect. You can modify the parameters used
> for re-connect any way you like. If you re-use the existing pn_connection_t
> your sessions and links will be automatically re-opened. If you don't want
> that, you can  throw away the old pn_connection_t and re-connect with a new
> one.
>
> The C++ binding provides automatic reconnect with some built-in options,
> including a list of URLs. You can be notified of a disconnect by
> on_transport_error(), but I don't think the current API allows you to
> change the reconnect URL list at that point. If the built-in options
> don't do what you need,  you can turn off the built-in automatic reconnect
> and implement your own custom reconnect strategy in on_transport_error(),
> similar to what I described for C above.
>
>
> > On 03/01/19 10:30, VERMEULEN Olivier wrote:
> > > Hello,
> > >
> > > Any feedback on the below proposition?
> > >
> > > Thanks,
> > > Olivier
> > >
> > > From: VERMEULEN Olivier
> > > Sent: Tuesday, 18 December 2018 15:01
> > > To: 'users@qpid.apache.org' 
> > > Subject: RE: [Proton-C] Discovery
> > >
> > > Hello,
> > >
> > > We looked into the proton-c implementation and didn't find anything
> that
> > would allow us to implement a qpid-jms like discovery.
> > > So I was wondering if we could add, directly in proton-c, an
> onReconnect
> > callback (or something similar) that would allow us to modify the list of
> > URLs the client tries to connect to.
> > > We need this to answer the following use case:
> > > the dispatch-router (host1:1234) on which the client was connected goes
> > down
> > > the client enters the reconnect loop (on host1:1234)
> > > we restart the dispatch-router but on another machine (host2:5678)
> > > the client reconnects -> this is currently not happening
> > > Note that we can do the pull-request but I wanted to run the
> proposition
> > by you first.
> > >
> > > Thanks,
> > > Olivier
> > >
> > > From: VERMEULEN Olivier
> > > Sent: Tuesday, 11 December 2018 12:34
> > > To: users@qpid.apache.org
> > > Subject: [Proton-C] Discovery
> > >
> > > Hello,
> > >
> > > I was looking into the qpid-jms-discovery project which seems very nice
> > for what I'm trying to do: update the list of dispatch-routers the client
> > can connect to during failover (with a custom discovery logic).
> > > I wanted to know if there is something similar with proton-c or at
> least
> > a way for me to implement it?
> > >
> > > Thanks,
> > > Olivier
> > >
> > >
> >
> >
> > -
> > To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> > For additional commands, e-mail: 

Re: C++ imperative client API

2018-10-19 Thread Rabih M
 Hello,

In Murex, we advanced a little bit on the subject of having an imperative
API in proton.
We thought about 2 ways of doing this:

The first way is to have a wrapper over Proton C++ with an imperative style
(like the JMS API over Proton-J in Java, or like the multithreaded_client.cpp
example):
The communication between the two threads (the user thread and the container
thread) will be done using futures and queues.

To establish the connection, we can have a JMS- or Qpid-messaging-like API,
and then have synchronous and asynchronous capabilities in the sender/receiver.
For example:

class Receiver {
   /* Synchronous receive with a timeout.
    * The receiver will add 1 credit on the link, and when the timeout is
    * reached, it asks for a drain.
    * If the timeout parameter is 0, the drain will be sent right after the
    * credit is added (equivalent to a tryReceive).
    * If the timeout parameter is -1, we will wait indefinitely for the
    * answer (like JMS).
    * If the timeout expires and a message is received in the container
    * thread after this method exits, we will release the message so it
    * goes back to the peer.
    * Exceptions are thrown when the timeout expires or when the container
    * thread signals an error.
    */
   Message syncReceive(long timeout);

   /* Batch synchronous receive with a timeout.
    * The receiver will add n credits on the link, and when the timeout is
    * reached, it asks for a drain.
    * If the timeout is 0, the drain will be sent right after the credits
    * are added (equivalent to a tryReceive).
    * If the timeout is -1 we will wait indefinitely for the answer.
    * If the timeout expires and some messages are received in the container
    * thread after this method exits, we will release those messages.
    * This method has batch performance advantages over the previous one.
    * No exception is thrown; the vector returns the received messages.
    */
   std::vector<Message> batchReceive(int numberOfRequestedMessages, long timeout);

   /* Setting a listener.
    * Similar to the JMS setMessageListener.
    */
   void setMessageListener(MessageListener /* can be a std::function */ listener);

   /* Asynchronous receive.
    * The receiver will add 1 credit on the link, and the value of the
    * future is set when a message or an error is received.
    * If there are no messages on the other side, future.get() will hang.
    */
   std::future<Message> receive();

   /* A batch asynchronous receive.
    */
   std::vector<std::future<Message>> asyncBatchReceive(int numberOfRequestedMessages);
};
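For illustration, a rough sketch of how this wrapper could be used
(Container, Connection, Session, Receiver and Message are the proposed
wrapper classes, not existing proton types):

   Container cont;
   Connection conn = cont.openConnection("//127.0.0.1:5672", connection_options());
   Session sess = conn.openSession(session_options());
   Receiver rec = sess.openReceiver("examples", receiver_options());

   Message m1 = rec.syncReceive(5000);        // blocks for up to 5 seconds
   std::future<Message> fut = rec.receive();  // one credit issued
   Message m2 = fut.get();                    // resolved by the container thread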

The second way is just an idea I had; I did not go into much detail on
whether it is feasible. The idea is to enhance the proton container API to
become thread safe: the container object could return thread-safe objects.
For example, open_sender() could return a thread-safe sender object that
internally uses the work_queue to send messages. For the receiver object, we
could return a future of a message when receive() is called.
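A rough sketch of what that could look like from the caller's side (purely
hypothetical names and semantics; today's proton senders and receivers are
not thread safe):

   proton::container cont(handler);
   // Hypothetical thread-safe wrappers handed out by the container; every
   // call would internally be funneled through the connection's work_queue.
   ThreadSafeSender snd = cont.open_sender_ts("//127.0.0.1:5672/examples");
   snd.send(proton::message("hello"));                // enqueued on the work_queue

   ThreadSafeReceiver rcv = cont.open_receiver_ts("//127.0.0.1:5672/examples");
   std::future<proton::message> fut = rcv.receive();  // value set on the container thread
   proton::message m = fut.get();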


What do you think of those propositions?
If you wish, we can have some voice calls to discuss more...

Best regards,
Rabih


On Tue, Feb 13, 2018 at 8:44 PM Alan Conway  wrote:

> On Tue, Feb 13, 2018 at 8:30 AM, Rabih M  wrote:
>
> > Hello,
> >
> > As Olivier said at Murex we wrote a small C++ client with an imperative
> API
> > that wraps proton. At that time we did not know that the community had
> > plans to develop one.
> >
> > The API we wrote was inspired from JMS (connection, session, messaging
> > producer, messaging consumer...). So the whole thing is similar to
> Qpid-Jms
> > wrapping Proton-J.
> >
> > What design are you thinking about?
> >
> >
> We are thinking along similar lines, see also the old qpid::messaging API
> which is quite JMS like.
>
> The main weakness of those APIs is the difficulty of handling asynchronous
> events and flow control, and an inability to write server-like
> applications.
>
> The goal is to come up with something that can be used like JMS or
> qpid::messaging for simple cases, but  also allows a more asynchronous
> style of programming - without inversion of control. The thinking so far is
> to use queues and futures (or similar features depending on language) to
> handle async events so that you can write multi-threaded clients/servers or
> single-threaded apps that fire off a bunch of work in parallel, then react
> as things complete (rather than a synchronous send/wait; send/wait;
> send/wait pattern.)
>
> We have one imperative API so far in Go, with a good example of how the
> imperative style reduces coding for a simple broker example:
>
>
>
> https://github.com/apache/qpid-proton/tree/master/examples/go#a-tale-of-two-brokers
>
> In Go we use the native channel mechanism, which combines the features of
> futures and queues. In C++ we might be able to use std::future, but it is
> likely that we will need to add some queue

Problems building dispatch-router 1.3.0

2018-09-28 Thread Rabih M
Hello,

We are trying to build the latest released version of the dispatch router,
1.3.0.
We are using a Red Hat OS (2.6.32-358.el6.x86_64 GNU/Linux), GCC 4.9.2 and
Python 2.7.8.

And we have 2 failing unit tests:

system_tests_two_routers:



[linux] 30: FAIL: test_10_propagated_disposition
(system_tests_two_routers.TwoRouterTest)

[linux] 30: 

[linux] 30: Traceback (most recent call last):

[linux] 30:   File
".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_two_routers.py",
line 191, in test_10_propagated_disposition

[linux] 30: test.run()

[linux] 30:   File
".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_two_routers.py",
line 1229, in run

[linux] 30: self.test.assertEqual(['accept', 'reject'],
sorted(self.settled))

[linux] 30: AssertionError: Lists differ: [u'accept', u'reject'] !=
[u'reject']

[linux] 30:

[linux] 30: First differing element 0:

[linux] 30: accept

[linux] 30: reject

[linux] 30:

[linux] 30: First list contains 1 additional elements.

[linux] 30: First extra element 1:

[linux] 30: reject

[linux] 30:

[linux] 30: - [u'accept', u'reject']

[linux] 30: + [u'reject']

[linux] 30:


And system_tests_console :



Test command: /opt/rh/python27/root/usr/bin/python
".../dispatch-workspace/build-dir/qpid-dispatch/tests/run.py" "-x"
"unit2" "-v" "system_tests_console"

[linux] 48: Test timeout computed to be: 1500

[linux] 48: ERROR

[linux] 48:

[linux] 48: =

[linux] 48: ERROR: setUpClass (system_tests_console.ConsoleTest)

[linux] 48: -

[linux] 48: Traceback (most recent call last):

[linux] 48:   File
"/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_tests_console.py",
line 45, in setUpClass

[linux] 48: cls.router = cls.tester.qdrouterd('test-router', config)

[linux] 48:   File
".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 557, in qdrouterd

[linux] 48: return self.cleanup(Qdrouterd(*args, **kwargs))

[linux] 48:   File
".../dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 352, in __init__

[linux] 48: self.wait_ready()

[linux] 48:   File
"/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 478, in wait_ready

[linux] 48: self.wait_ports(**retry_kwargs)

[linux] 48:   File
"/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 463, in wait_ports

[linux] 48: wait_ports(self.ports_family, **retry_kwargs)

[linux] 48:   File
"/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 185, in wait_ports

[linux] 48: wait_port(port=port, protocol_family=protocol_family,
**retry_kwargs)

[linux] 48:   File
"/data/jenkins-slave/home/workspace/proton-acceptance/dispatch-workspace/qpid-dispatch-1.3.0/tests/system_test.py",
line 177, in wait_port

[linux] 48: raise Exception("wait_port timeout on host %s port %s:
%s"%(host, port, e))

[linux] 48: Exception: wait_port timeout on host 127.0.0.1 port 27703:
[Errno 111] Connection refused


Any idea why this is happening?

Best regards,
Rabih


[proton-c] windows is taking too long to detect invalid port

2018-06-18 Thread Rabih M
Hello,

If you try to connect using an invalid server port on Windows, the proton
client blocks for 2 seconds. On Linux we do not have this problem.

Do you have any idea why?

Best regards,
Rabih

OS: Windows 7, Compiler: MSVC 2013 Version 12 Update 5, Proton: 0.22.0

Sample code:

class hello_world : public proton::messaging_handler {
   void on_container_start(proton::container& c) override{
  c.connect("localhost:"); // no server is started on this port
   }
};


int main()
{
   hello_world hw;

   std::chrono::steady_clock::time_point begin =
std::chrono::steady_clock::now();
   try {
  proton::container(hw).run();
   }
   catch (const std::exception& ) {}
   std::chrono::steady_clock::time_point end =
std::chrono::steady_clock::now();

   std::cout << "Time difference = " <<
std::chrono::duration_cast(end - begin).count()
<< std::endl;

   return 0;
}


Re: C++ imperative client API

2018-02-13 Thread Rabih M
Hello,

As Olivier said, at Murex we wrote a small C++ client with an imperative API
that wraps proton. At that time we did not know that the community had
plans to develop one.

The API we wrote was inspired from JMS (connection, session, messaging
producer, messaging consumer...). So the whole thing is similar to Qpid-Jms
wrapping Proton-J.

What design are you thinking about?

We would like to unify our efforts with the community to have a working
C++ imperative
client.

Best regards,
Rabih

On Tue, Feb 6, 2018 at 5:54 PM, VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:

> Hello Justin.
>
> Good to hear.
> Maybe we could also send you what we did on our side.
> It's still pretty basic but it can be a starting point.
>
> Olivier
>
> -Original Message-
> From: Justin Ross [mailto:justin.r...@gmail.com]
> Sent: Thursday, 1 February 2018 14:37
> To: users@qpid.apache.org
> Subject: Re: C++ imperative client API
>
> Hi, Olivier.  It's still in prototype and research mode.  We don't have
> anything like a design proposal yet.  I hope we'll have more time to work
> on it soon.
>
> When we have some concrete design points and sample code for evaluation,
> we'll share it here so we can have an open design discussion.
>
> On Tue, Jan 30, 2018 at 12:29 AM, VERMEULEN Olivier <
> olivier.vermeu...@murex.com> wrote:
>
> > Hello,
> >
> > We already discussed with some of you our need @Murex for a C++
> > imperative client API.
> > I remember Justin saying that you were making progress on this subject
> > but I can't seem to find any information about it.
> > Could you give us a quick status on this? Note that we would also be
> > interested in contributing to the development of this new client.
> >
> > Thanks,
> > Olivier
>
> -
> To unsubscribe, e-mail: users-unsubscr...@qpid.apache.org
> For additional commands, e-mail: users-h...@qpid.apache.org
>
>


Re: Qpid java broker is unable to reconnect to database after restart

2018-01-05 Thread Rabih M
Thank you Alex for the fast reply.
Here is the link to the Jira issue:
https://issues.apache.org/jira/browse/QPID-8070

Best regards,
Rabih

On Fri, Jan 5, 2018 at 4:36 PM, Oleksandr Rudyy <oru...@gmail.com> wrote:

> Hi Rabih,
> It is a bug in Qpid JDBC message store implementation. Please raise a
> JIRA and we will try to fix it in 6.1.6/6.0.9/7.0.1.
>
> You can try to use BDB or Derby message store instead of JDBC one to
> work around the issue.
>
> Kind Regards,
> Alex
>
> On 5 January 2018 at 15:13, Rabih M <rabih.prom...@gmail.com> wrote:
> > Hello,
> >
> > We are using Qpid java broker version 6.1.4.
> >
> > Our test case:
> > We have a JMS client sending messages continuously in autoAck mode to a
> > messaging cluster. The cluster is composed of 1 dispatch router and 2
> java
> > brokers which are connected to an Oracle Db.
> > At some point, the test kills a broker using kill -9 and then attempts to
> > restart it.
> > Randomly, we have a NullPointerException while the broker is restarting
> and
> > connecting to the virtual host node (the full error stack is at the end
> of
> > the email).
> >
> > Do you have any idea why this is happening? is there a chance that a kill
> > -9 on a broker leaves the Db in an unstable state?
> >
> > Best regards,
> > Rabih
> >
> >
> > 2018-01-03 17:16:45,930 INFO  [VirtualHostNode-default-Config]
> > (q.m.t.recovery_start) - [Broker]
> > [vh(/default)/ms(GenericJDBCMessageStore)] TXN-1004 : Recovery Start
> > 2018-01-03 17:16:45,938 WARN  [VirtualHostNode-default-Config]
> > (o.a.q.s.v.SynchronousMessageStoreRecoverer) - Message id 1 in log
> > references queue with id 0b2a06aa-6d46-49aa-885f-63f3cd73108d which is
> not
> > in the configuration, entry will be discarded
> > 2018-01-03 17:16:45,946 ERROR [VirtualHostNode-default-Config]
> > (o.a.q.s.m.AbstractConfiguredObject) - Failed to open object with name
> > 'default'.  Object will be put into ERROR state.
> > java.lang.NullPointerException: null
> > at
> > org.apache.qpid.server.store.AbstractJDBCMessageStore.commitTranAsync(
> AbstractJDBCMessageStore.java:826)
> > at
> > org.apache.qpid.server.store.AbstractJDBCMessageStore.access$600(
> AbstractJDBCMessageStore.java:59)
> > at
> > org.apache.qpid.server.store.AbstractJDBCMessageStore$JDBCTransaction.
> commitTranAsync(AbstractJDBCMessageStore.java:1205)
> > at
> > org.apache.qpid.server.store.jdbc.GenericAbstractJDBCMessageStor
> e$RecordedJDBCTransaction.commitTranAsync(GenericAbstractJDBCMessageStor
> e.java:142)
> > at
> > org.apache.qpid.server.virtualhost.SynchronousMessageStoreRecover
> er$MessageInstanceVisitor.handle(SynchronousMessageStoreRecover
> er.java:219)
> > at
> > org.apache.qpid.server.store.AbstractJDBCMessageStore$
> JDBCMessageStoreReader.visitMessageInstances(
> AbstractJDBCMessageStore.java:1911)
> > at
> > org.apache.qpid.server.virtualhost.SynchronousMessageStoreRecover
> er.recover(SynchronousMessageStoreRecoverer.java:82)
> > at
> > org.apache.qpid.server.virtualhost.AbstractVirtualHost.
> postCreateDefaultExchangeTasks(AbstractVirtualHost.java:2581)
> > at
> > org.apache.qpid.server.virtualhost.AbstractVirtualHost.onActivate(
> AbstractVirtualHost.java:2563)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> > sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> > at
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject.attainState(
> AbstractConfiguredObject.java:1482)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject.attainState(
> AbstractConfiguredObject.java:1461)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject$8.onSuccess(
> AbstractConfiguredObject.java:1035)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject$8.onSuccess(
> AbstractConfiguredObject.java:1029)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject$22$1.
> run(AbstractConfiguredObject.java:2609)
> > at
> > org.apache.qpid.server.model.AbstractConfiguredObject$22$1.
> run(AbstractConfiguredObject.java:2605)
> > at java.security.AccessController.doPrivileged(Native Method)
> >   

Qpid java broker is unable to reconnect to database after restart

2018-01-05 Thread Rabih M
Hello,

We are using Qpid java broker version 6.1.4.

Our test case:
We have a JMS client sending messages continuously in autoAck mode to a
messaging cluster. The cluster is composed of 1 dispatch router and 2 java
brokers which are connected to an Oracle Db.
At some point, the test kills a broker using kill -9 and then attempts to
restart it.
Randomly, we have a NullPointerException while the broker is restarting and
connecting to the virtual host node (the full error stack is at the end of
the email).

Do you have any idea why this is happening? is there a chance that a kill
-9 on a broker leaves the Db in an unstable state?

Best regards,
Rabih


2018-01-03 17:16:45,930 INFO  [VirtualHostNode-default-Config]
(q.m.t.recovery_start) - [Broker]
[vh(/default)/ms(GenericJDBCMessageStore)] TXN-1004 : Recovery Start
2018-01-03 17:16:45,938 WARN  [VirtualHostNode-default-Config]
(o.a.q.s.v.SynchronousMessageStoreRecoverer) - Message id 1 in log
references queue with id 0b2a06aa-6d46-49aa-885f-63f3cd73108d which is not
in the configuration, entry will be discarded
2018-01-03 17:16:45,946 ERROR [VirtualHostNode-default-Config]
(o.a.q.s.m.AbstractConfiguredObject) - Failed to open object with name
'default'.  Object will be put into ERROR state.
java.lang.NullPointerException: null
at
org.apache.qpid.server.store.AbstractJDBCMessageStore.commitTranAsync(AbstractJDBCMessageStore.java:826)
at
org.apache.qpid.server.store.AbstractJDBCMessageStore.access$600(AbstractJDBCMessageStore.java:59)
at
org.apache.qpid.server.store.AbstractJDBCMessageStore$JDBCTransaction.commitTranAsync(AbstractJDBCMessageStore.java:1205)
at
org.apache.qpid.server.store.jdbc.GenericAbstractJDBCMessageStore$RecordedJDBCTransaction.commitTranAsync(GenericAbstractJDBCMessageStore.java:142)
at
org.apache.qpid.server.virtualhost.SynchronousMessageStoreRecoverer$MessageInstanceVisitor.handle(SynchronousMessageStoreRecoverer.java:219)
at
org.apache.qpid.server.store.AbstractJDBCMessageStore$JDBCMessageStoreReader.visitMessageInstances(AbstractJDBCMessageStore.java:1911)
at
org.apache.qpid.server.virtualhost.SynchronousMessageStoreRecoverer.recover(SynchronousMessageStoreRecoverer.java:82)
at
org.apache.qpid.server.virtualhost.AbstractVirtualHost.postCreateDefaultExchangeTasks(AbstractVirtualHost.java:2581)
at
org.apache.qpid.server.virtualhost.AbstractVirtualHost.onActivate(AbstractVirtualHost.java:2563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.qpid.server.model.AbstractConfiguredObject.attainState(AbstractConfiguredObject.java:1482)
at
org.apache.qpid.server.model.AbstractConfiguredObject.attainState(AbstractConfiguredObject.java:1461)
at
org.apache.qpid.server.model.AbstractConfiguredObject$8.onSuccess(AbstractConfiguredObject.java:1035)
at
org.apache.qpid.server.model.AbstractConfiguredObject$8.onSuccess(AbstractConfiguredObject.java:1029)
at
org.apache.qpid.server.model.AbstractConfiguredObject$22$1.run(AbstractConfiguredObject.java:2609)
at
org.apache.qpid.server.model.AbstractConfiguredObject$22$1.run(AbstractConfiguredObject.java:2605)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at
org.apache.qpid.server.model.AbstractConfiguredObject$22.onSuccess(AbstractConfiguredObject.java:2604)
at
com.google.common.util.concurrent.Futures$6.run(Futures.java:1319)
at
org.apache.qpid.server.configuration.updater.TaskExecutorImpl$ImmediateIfSameThreadExecutor.execute(TaskExecutorImpl.java:404)
at
org.apache.qpid.server.configuration.updater.TaskExecutorImpl.execute(TaskExecutorImpl.java:187)
at
com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
at
com.google.common.util.concurrent.ExecutionList.add(ExecutionList.java:101)
at
com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:170)
at
com.google.common.util.concurrent.Futures.addCallback(Futures.java:1322)
at
org.apache.qpid.server.model.AbstractConfiguredObject.addFutureCallback(AbstractConfiguredObject.java:2599)
at
org.apache.qpid.server.model.AbstractConfiguredObject.doAttainState(AbstractConfiguredObject.java:1028)
at
org.apache.qpid.server.model.AbstractConfiguredObject.access$300(AbstractConfiguredObject.java:91)
at
org.apache.qpid.server.model.AbstractConfiguredObject$1.execute(AbstractConfiguredObject.java:584)
at
org.apache.qpid.server.model.AbstractConfiguredObject$1.execute(AbstractConfiguredObject.java:571)
at

Re: [proton C++ 0.16.0] idle_timeout on windows can't be less than 1s

2017-04-19 Thread Rabih M
Hello Cliff,

Here is the jira issue: https://issues.apache.org/jira/browse/PROTON-1464

Thanks,
Rabih

On Thu, Apr 13, 2017 at 10:16 PM, Cliff Jansen <cliffjan...@gmail.com>
wrote:

> This should work on Windows too.
>
> Could you please raise a JIRA?  Thanks.
>
> Cliff
>
> On 4/12/17, Rabih M <rabih.prom...@gmail.com> wrote:
> > Hello,
> >
> > I noticed that windows does not take into consideration the configured
> > connection idle_timeout if it is less than 1 second. Linux does not have
> > this problem.
> >
> > I wrote some code to reproduce at the end of the mail.
> >
> > Is it a bug ?
> >
> > Best regards,
> > Rabih
> >
> >
> > #include 
> > #include 
> > #include 
> > #include 
> > #include 
> >
> > class hello_world : public proton::messaging_handler {
> > public:
> >void on_container_start(proton::container& c) {
> >c.connect("localhost:77",
> > proton::connection_options().idle_timeout(proton::duration(10)));
> //takes
> > ~1000ms
> >// or c.connect("host:77",
> > proton::connection_options().idle_timeout(proton::duration(10)));
> //takes ~
> > 2500ms
> >}
> > };
> >
> > int main() {
> >LARGE_INTEGER frequency;
> >LARGE_INTEGER t1, t2;
> >QueryPerformanceFrequency();
> >QueryPerformanceCounter();
> >
> >try {
> >   hello_world hw;
> >   proton::default_container(hw).run();
> >   return 0;
> >}
> >catch (const std::exception& e) {
> >   std::cerr << e.what() << std::endl;
> >}
> >
> >QueryPerformanceCounter();
> >double elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 /
> > frequency.QuadPart;
> >std::cout << "elapsed: " << elapsedTime << std::endl;
> >
> >return 1;
> > }
> >
>
>
>


[proton C++ 0.16.0] idle_timeout on windows can't be less than 1s

2017-04-12 Thread Rabih M
Hello,

I noticed that Windows does not take the configured connection
idle_timeout into consideration if it is less than 1 second. Linux does
not have this problem.

I wrote some code to reproduce it at the end of the mail.

Is it a bug?

Best regards,
Rabih


// The #include targets below were stripped by the mail archive; these are a
// best-guess reconstruction.
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/default_container.hpp>
#include <proton/messaging_handler.hpp>
#include <iostream>
#include <windows.h>

class hello_world : public proton::messaging_handler {
public:
   void on_container_start(proton::container& c) {
      c.connect("localhost:77",
                proton::connection_options().idle_timeout(proton::duration(10))); // takes ~1000ms
      // or c.connect("host:77",
      //    proton::connection_options().idle_timeout(proton::duration(10))); // takes ~2500ms
   }
};

int main() {
   LARGE_INTEGER frequency;
   LARGE_INTEGER t1, t2;
   QueryPerformanceFrequency(&frequency);
   QueryPerformanceCounter(&t1);

   try {
      hello_world hw;
      proton::default_container(hw).run();
      return 0;
   }
   catch (const std::exception& e) {
      std::cerr << e.what() << std::endl;
   }

   QueryPerformanceCounter(&t2);
   double elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
   std::cout << "elapsed: " << elapsedTime << std::endl;

   return 1;
}


[Proton C++ 0.16.0] Reconnect and reconnect timer

2017-03-17 Thread Rabih M
Hello,

A bit of context:
I am trying to implement a connection retry mechanism using proton, a bit
similar to the failover URL in JMS...

I discovered that we can set the "reconnect" option via
connection_options.reconnect(reconnect_timer).

There is no clear documentation on how this option behaves.
Does it try to reconnect only at connection start? And does it try to
reconnect in the middle of an AMQP communication (after sending/receiving
some messages)?

On the other side, how do we know that the max retries or the deadline
timeout has been reached?

Best regards,
Rabih
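
A minimal sketch of wiring up the reconnect option discussed above, against
the 0.16.0-era C++ API (the reconnect_timer construction and its defaults
are assumptions here; check proton/reconnect_timer.hpp for the real
parameters):

// Sketch only: header names and reconnect_timer defaults are assumptions.
#include <proton/connection_options.hpp>
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/reconnect_timer.hpp>
#include <string>

class retrying_client : public proton::messaging_handler {
   std::string url_;
public:
   retrying_client(const std::string& url) : url_(url) {}

   void on_container_start(proton::container& c) override {
      // The timer is where the retry delays, back-off and maximum number of
      // attempts would be tuned; a default-constructed one is assumed here.
      proton::reconnect_timer timer;
      c.connect(url_, proton::connection_options().reconnect(timer));
   }
};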


[Qpid-Proton C++] detecting only connection errors

2017-03-14 Thread Rabih M
Hello,

I am trying to implement a retry mechanism using proton. If the connection
between the AMQP peers is down or the remote peer itself is down, I would
like to retry the current operation (send, receive...), a bit similar to
the failover URL in JMS...

Therefore I would like to capture only the errors related to connection
problems to trigger the retry.
If I implement on_transport_error(), will I get only connection errors
here? Or will I receive other, undesired types of errors too?

Best regards,
Rabih
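
A minimal sketch of the handler side of this question, assuming the
0.16.0-era C++ API (the same callbacks appear in the code attached to other
threads in this archive); whether only connection-level failures reach
on_transport_error should be verified against the binding in use:

#include <proton/connection.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/transport.hpp>
#include <iostream>

class retry_trigger_handler : public proton::messaging_handler {
public:
   void on_transport_error(proton::transport& t) override {
      // Transport-level failures (peer unreachable, connection dropped)
      // surface here; this is the natural place to schedule a retry of the
      // pending send or receive.
      std::cerr << "transport error: " << t.error().what() << std::endl;
   }

   void on_connection_error(proton::connection& c) override {
      // Errors the remote peer reports on the connection itself.
      std::cerr << "connection error: " << c.error().what() << std::endl;
   }
};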


Re: [Qpid Proton] How mandatory is Cyrus for SASL

2017-02-08 Thread Rabih M
Hello Alan,

Thank you for your reply.
https://issues.apache.org/jira/browse/DISPATCH-631

Best regards,
Rabih

On Tue, Feb 7, 2017 at 8:56 PM, Alan Conway <acon...@redhat.com> wrote:

> On Tue, 2017-02-07 at 19:32 +0100, Rabih M wrote:
> > Hello,
> >
> > I compiled qpid-proton 0.16.0 with "-DSASL_IMPL=none", but i have an
> > 3 of qpid-dispatch 0.7.0 unit tests that are failing:
> >
> > system_tests_qdstat
> > system_tests_sasl_plain
> > system_tests_deprecated
> >
> > I attached the tests output.
> >
> > Do you have any idea, where should i look? Is Cyrus mandatory for the
> > qpid-dispatch 0.7.0?
>
> Not mandatory, the tests should probably be made conditional to skip
> tests for SASL features that are not available. Raise a JIRA for that.
>
> > Best regards,
> > Rabih
> >
> > On Thu, Feb 2, 2017 at 10:18 PM, Andrew Stitcher <astitc...@redhat.co
> > m> wrote:
> > > On Thu, 2017-02-02 at 19:54 +, Adel Boutros wrote:
> > > > In our case, we only use EXTERNAL or ANONYMOUS so it should be
> > > > enough.
> > > >
> > > >
> > > > Is there a reason it is called "null"?
> > >
> > > It's actually the "none" implementation! As in, no other sasl
> > > library
> > > found.
> > >
> > > You can make sure it is used by adding "-DSASL_IMPL=none" to the
> > > cmake
> > > command line. Or use ccmake to set SASL_IMPL interactively.
> > >
> > > Andrew
> > >
> > >
> > >
> > >
> >
>
>
>
>


Re: [Qpid java broker 7.0.0] When will it be released?

2017-02-08 Thread Rabih M
Hello Keith,

Yes, it is very good news for us.

Thank you,
Rabih

On Wed, Feb 8, 2017 at 10:53 AM, Keith W <keith.w...@gmail.com> wrote:

> Hi Rabih,
>
> We've been having some further discussions and have decided to alter
> course.   We now plan to put out QPID-7558 as a defect release on the
> 6.1 line.  This will be 6.1.2.  I'd hope to to have the release public
> before the end of the month.  Would this be useful to you?  Please let
> me know.
>
> Kind regards,
>
> Keith.
>
>
>
>
>
> On 2 February 2017 at 17:34, Keith W <keith.w...@gmail.com> wrote:
> > Hi Rabih
> >
> > Whilst the initial work to backport this one patch would be trivial,
> > unfortunately the overhead of maintaining the extra code line is not.
> >  The team we have working at "Qpid Broker for Java/0-x client" is
> > completely focused on the v7.0 work right now (mainly AMQP 1.0
> > implementation improvements), and I don't have the spare capacity to
> > produce a 6.2.0.   The overhead for a single enhancement is simple too
> > much.
> >
> > I will attach a patch to QPID-7558 (first thing tomorrow) that will
> > apply cleanly to 6.1.x, allowing you to simply checkout/build your
> > own.
> >
> > I will keep this under review.  If v7.0 slips from its estimated
> > delivery date, or it makes sense to release other features early in a
> > v6.2, QPID-7558 will be included too.
> >
> > Hope this makes sense to you. Kind regards, Keith.
> >
> >
> > On 2 February 2017 at 13:43, Rabih M <rabih.prom...@gmail.com> wrote:
> >> Hello,
> >>
> >> Thank you for your answers.
> >> We would appreciate if we could have minor version including this
> feature
> >> (if it is easy to back-port of course) because we already a use case to
> >> implement that requires this feature.
> >>
> >> Best regards,
> >> Rabih
> >>
> >> On Wed, Feb 1, 2017 at 9:18 PM, Rob Godfrey <rob.j.godf...@gmail.com>
> wrote:
> >>
> >>> On 1 February 2017 at 18:37, Robbie Gemmell <robbie.gemm...@gmail.com>
> >>> wrote:
> >>>
> >>> > If it was considered for back port then it seems a 6.2.0 would be
> >>> > better if sticking to a strict semver format, though I wouldn't
> overly
> >>> > mind if it was just included in a 6.1.2 with any intended bugfixes
> >>> > either if folks really didn't want to have a 6.2.x a couple months
> >>> > before doing 7.0.0, though I don't see a huge issue with that
> >>> > personally. I also don't think it would be unreasonable to tell
> anyone
> >>> > that since such a 6.2.x series would be so similar to 6.1.x, renamed
> >>> > only on semver grounds for a fairly isolated feature many folks wont
> >>> > touch, that there won't be any more 6.1.x releases to cut down on the
> >>> > overhead of maintaining two almost identical release branches.
> >>> >
> >>> > I guess the main thing is really how much work it is to back port it,
> >>> > which I can't say I know.
> >>> >
> >>>
> >>> I think the work to backport from trunk to a branch from 6.1.x should
> be
> >>> pretty trivial.  In terms of versioning, I'm probably more "liberal" in
> >>> what I would allow in a 6.1.2 release than others would be, though
> given
> >>> that this essentially adds (in a compatible manner) to the (REST) API
> >>> around the JDBC virtualhost(node), I can see why we would want this to
> be a
> >>> 6.2.
> >>>
> >>> Fundamentally using semantic versioning is supposed to help the user
> base,
> >>> if by using it we hold off on releasing a feature that is useful to our
> >>> userbase because it means that we worry about having too many
> versions, it
> >>> feels like we are doing something wrong.
> >>>
> >>> (I hate myself for suggesting this, but one *horrible* way of avoiding
> a
> >>> 6.2 while still adding the feature to 6.x is, I guess, simply not
> making
> >>> the attribute "managed" in 6.1.x, just using the context variable for
> the
> >>> prefix... so it can be changed by setting the context variable, but
> there
> >>> is no change in the API)
> >>>
> >>> -- Rob
> >>>
> >>>
> >>> >
> >>> > Robbie
> >>> >
> >>> > On 1 February 2017 

Re: [Qpid Proton] How mandatory is Cyrus for SASL

2017-02-07 Thread Rabih M
Hello,

I compiled qpid-proton 0.16.0 with "-DSASL_IMPL=none", but 3 of the
qpid-dispatch 0.7.0 unit tests are failing:

system_tests_qdstat
system_tests_sasl_plain
system_tests_deprecated


I attached the tests output.

Do you have any idea, where should i look? Is Cyrus mandatory for the
qpid-dispatch 0.7.0?

Best regards,
Rabih

On Thu, Feb 2, 2017 at 10:18 PM, Andrew Stitcher 
wrote:

> On Thu, 2017-02-02 at 19:54 +, Adel Boutros wrote:
> > In our case, we only use EXTERNAL or ANONYMOUS so it should be
> > enough.
> >
> >
> > Is there a reason it is called "null"?
>
> It's actually the "none" implementation! As in, no other sasl library
> found.
>
> You can make sure it is used by adding "-DSASL_IMPL=none" to the cmake
> command line. Or use ccmake to set SASL_IMPL interactively.
>
> Andrew
>
>
>
>
FAIL: test_ssl_cert_to_auth_fail_no_sasl_external 
(system_tests_qdstat.QdstatSslNoExternalTest)
--
Traceback (most recent call last):
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_qdstat.py",
 line 325, in test_ssl_cert_to_auth_fail_no_sasl_external
self.ssl_test_bad('auth_s', ['client_cert_all'])
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_qdstat.py",
 line 322, in ssl_test_bad
self.assertRaises(AssertionError, self.ssl_test, url_name, arg_names)
AssertionError: AssertionError not raised

==
FAIL: test_inter_router_plain_exists 
(system_tests_sasl_plain.RouterTestPlainSasl)
The setUpClass sets up two routers with SASL PLAIN enabled.
--
Traceback (most recent call last):
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_sasl_plain.py",
 line 121, in test_inter_router_plain_exists
self.assertIn("inter-router", out)
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_test.py",
 line 674, in assertIn
assert item in items, "%s not in %s" % (item, items)
AssertionError: inter-router not in Connections
  Id  host container roledir  
security authentication
  
=
  13  127.0.0.1:43271  09eaefc9-7250-4a3b-a9d0-42c58e48a194  normal  in   
no-security  anonymous-user


==
FAIL: test_qdstat_connect_sasl (system_tests_sasl_plain.RouterTestPlainSasl)
--
Traceback (most recent call last):
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_sasl_plain.py",
 line 135, in test_qdstat_connect_sasl
"qdstat exit status %s, output:\n%s" % (p.returncode, out)
AssertionError: qdstat exit status 1, output:
ConnectionException: Connection amqp://0.0.0.0:24677/$management disconnected: 
Condition('amqp:unauthorized-access', 'Authentication failed [mech=NULL]')


==
FAIL: test_qdstat_connect_sasl_password_file 
(system_tests_sasl_plain.RouterTestPlainSasl)
--
Traceback (most recent call last):
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_sasl_plain.py",
 line 163, in test_qdstat_connect_sasl_password_file
"qdstat exit status %s, output:\n%s" % (p.returncode, out)
AssertionError: qdstat exit status 1, output:
ConnectionException: Connection amqp://0.0.0.0:24677/$management disconnected: 
Condition('amqp:unauthorized-access', 'Authentication failed [mech=NULL]')


==
FAIL: test_aaa_qdstat_connect_sasl_over_ssl 
(system_tests_sasl_plain.RouterTestPlainSaslOverSsl)
--
Traceback (most recent call last):
  File 
"/data/jenkins-slave/home/workspace/qpid-dispatch-router/label/linux/qpid-dispatch-0.7.0/tests/system_tests_sasl_plain.py",
 line 263, in test_aaa_qdstat_connect_sasl_over_ssl
"qdstat exit status %s, output:\n%s" % (p.returncode, out)
AssertionError: qdstat exit status 1, output:
ConnectionException: Connection amqps://0.0.0.0:24681/$management disconnected: 
Condition('amqp:unauthorized-access', 'Authentication failed [mech=NULL]')



Re: [Qpid java broker 7.0.0] When will it be released?

2017-02-02 Thread Rabih M
Hello,

Thank you for your answers.
We would appreciate it if we could have a minor version including this
feature (if it is easy to back-port, of course) because we already have a
use case to implement that requires it.

Best regards,
Rabih

On Wed, Feb 1, 2017 at 9:18 PM, Rob Godfrey <rob.j.godf...@gmail.com> wrote:

> On 1 February 2017 at 18:37, Robbie Gemmell <robbie.gemm...@gmail.com>
> wrote:
>
> > If it was considered for back port then it seems a 6.2.0 would be
> > better if sticking to a strict semver format, though I wouldn't overly
> > mind if it was just included in a 6.1.2 with any intended bugfixes
> > either if folks really didn't want to have a 6.2.x a couple months
> > before doing 7.0.0, though I don't see a huge issue with that
> > personally. I also don't think it would be unreasonable to tell anyone
> > that since such a 6.2.x series would be so similar to 6.1.x, renamed
> > only on semver grounds for a fairly isolated feature many folks wont
> > touch, that there won't be any more 6.1.x releases to cut down on the
> > overhead of maintaining two almost identical release branches.
> >
> > I guess the main thing is really how much work it is to back port it,
> > which I can't say I know.
> >
>
> I think the work to backport from trunk to a branch from 6.1.x should be
> pretty trivial.  In terms of versioning, I'm probably more "liberal" in
> what I would allow in a 6.1.2 release than others would be, though given
> that this essentially adds (in a compatible manner) to the (REST) API
> around the JDBC virtualhost(node), I can see why we would want this to be a
> 6.2.
>
> Fundamentally using semantic versioning is supposed to help the user base,
> if by using it we hold off on releasing a feature that is useful to our
> userbase because it means that we worry about having too many versions, it
> feels like we are doing something wrong.
>
> (I hate myself for suggesting this, but one *horrible* way of avoiding a
> 6.2 while still adding the feature to 6.x is, I guess, simply not making
> the attribute "managed" in 6.1.x, just using the context variable for the
> prefix... so it can be changed by setting the context variable, but there
> is no change in the API)
>
> -- Rob
>
>
> >
> > Robbie
> >
> > On 1 February 2017 at 15:17, Lorenz Quack <quack.lor...@gmail.com>
> wrote:
> > > Hello Rabih,
> > >
> > > Unfortunately, the v7 release is still a couple of months away.
> > > Out of curiosity, what is your time-line for when you would like this
> > > feature to land?
> > >
> > > We were considering back porting this but there are currently no plans
> > for a
> > > 6.2.0 release and as a new feature this is not really fit for a bug fix
> > > release (i.e. 6.1.2).
> > > Our limited resources are currently focused on v7 but it does involve a
> > fair
> > > amount of work.
> > >
> > > Sorry that this is probably not he answer you were hoping for.
> > >
> > > Kind regards,
> > > Lorenz
> > >
> > >
> > >
> > > On 01/02/17 14:35, Rabih M wrote:
> > >>
> > >> Hello,
> > >>
> > >> The question is in the title.
> > >> I am asking because we are interested in
> > >> https://issues.apache.org/jira/browse/QPID-7558
> > >>
> > >> Best regards,
> > >> Rabih
> > >>
> > >
> > >
> > >
> >
> >
> >
>


[Qpid java broker 7.0.0] When will it be released?

2017-02-01 Thread Rabih M
Hello,

The question is in the title.
I am asking because we are interested in
https://issues.apache.org/jira/browse/QPID-7558

Best regards,
Rabih


[QPID java broker] Does it support Amqp management 1.0?

2017-01-30 Thread Rabih M
Hello,

Does the java qpid broker supports Amqp management 1.0 ?

If not, are there any plans to do it?


Best regards,
Rabih


Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-24 Thread Rabih M
Hello Robbie,

Thank you for your answer.
I will try the solutions you proposed.

I just need one clarification. Concerning JMS, I already tried to use a
temporary queue but it didn't work. Maybe I am using it wrong. Here is
what I did:

...
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("$management");
MessageProducer producer = session.createProducer(queue);

Queue tempDest = session.createTemporaryQueue();
MessageConsumer responseConsumer = session.createConsumer(tempDest);
connection.start();

ObjectMessage objectMessage = session.createObjectMessage();
objectMessage.setJMSReplyTo(tempDest);
//fill the rest of the parameters ...
producer.send(objectMessage);
ObjectMessage replyMessage = (ObjectMessage) responseConsumer.receive(1000);
...

I was getting an exception on the .receive() from the dispatch router
saying that the reply-to address is bad...
Should I create the temporary queue and the consumer from a different
session than the producer?

Best regards,
Rabih
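
A hedged sketch of the dynamic-receiver pattern Robbie describes below,
written against the proton C++ binding that Rabih mentions (the option and
accessor names, source_options::dynamic() and receiver::source().address(),
are taken from later releases and should be treated as assumptions):

#include <proton/connection.hpp>
#include <proton/messaging_handler.hpp>
#include <proton/receiver.hpp>
#include <proton/receiver_options.hpp>
#include <proton/source.hpp>
#include <proton/source_options.hpp>
#include <string>

class dynamic_reply_to : public proton::messaging_handler {
   std::string reply_to_;
public:
   void on_connection_open(proton::connection& conn) override {
      // Ask the peer for a dynamically generated address instead of naming
      // one ourselves.
      conn.open_receiver("", proton::receiver_options().source(
                                 proton::source_options().dynamic(true)));
   }

   void on_receiver_open(proton::receiver& r) override {
      // Once the link is remotely attached, its source carries the
      // server-generated address; use it as the reply-to of the request.
      reply_to_ = r.source().address();
   }
};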


On Tue, Jan 24, 2017 at 5:37 PM, Robbie Gemmell <robbie.gemm...@gmail.com>
wrote:

> The session.createTemporaryQueue(); method is how JMS creates
> temporary destinations for use in reply-to scenarios. The dispatch
> tools will be doing essentially the same thing under the covers for
> theirs.
>
> Both work by creating a 'dynamic' link attachment, with the server
> returning an address for the dynamic attach to the client. One key
> difference is with JMS you create the temporary destination (for which
> the client uses a dynamic sender link to create), then separately
> create a receiver using it (and in turn the address associated with
> it), whereas non-JMS client/tools will typically just create a single
> dynamic receiver link and use its address for the reply-to.
>
> For proton-j, there aren't any examples for this, but you you
> essentially would change the link opening to not specify an address,
> and instead set the source(for a receiver) or target (for a sender) to
> be 'dynamic'. You would then inspect the remoteSource or remoteTarget
> once the link had been reemotely attached, and get its
> (server-generated) address, then use this as your reply-to.
>
> Robbie
>
> On 23 January 2017 at 11:00, Rabih M <rabih.prom...@gmail.com> wrote:
> > Hello,
> >
> > When I added to the JMS test the ".setJMSReplyTo()" queue, like you
> > suggested, it worked.
> > But I cheated a little to make it work because I did not find a way in
> JMS
> > to create a local queue. So I created a queue on the qpid-dispatch and
> read
> > the result from it. You can check the attached code.
> >
> > Is there any way using qpid-jms to create a local queue on the client
> side
> > to serve as a reply to address?
> >
> > For the proton-j test, I have the same question as above. I saw that it
> can
> > be done with the C++ API. Are there any examples for proton-j or
> something
> > that can help me?
> >
> > I saw may be in proton we don't need a local queue, may be we can get the
> > address of the consumer directly and pass it in reply-to field. Is it
> > correct?
> >
> > Thanks,
> > Rabih
> >
> >
> >
> > On Fri, Jan 20, 2017 at 10:18 PM, Rob Godfrey <rob.j.godf...@gmail.com>
> > wrote:
> >>
> >> On 20 January 2017 at 21:45, Ganesh Murthy <gmur...@redhat.com> wrote:
> >>
> >> >
> >> >
> >> > - Original Message -
> >> > > From: "Robbie Gemmell" <robbie.gemm...@gmail.com>
> >> > > To: users@qpid.apache.org
> >> > > Sent: Friday, January 20, 2017 2:18:45 PM
> >> > > Subject: Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms
> >> > >
> >> > > On 20 January 2017 at 19:06, Gordon Sim <g...@redhat.com> wrote:
> >> > > > On 20/01/17 18:40, Rabih M wrote:
> >> > > >>
> >> > > >> I inserted the map directly into the ObjectMessage like you told
> me
> >> > > >> Robbie and it worked.
> >> > > >>
> >> > > >> But like the proton-j case, the connector is not being created on
> >> > > >> the
> >> > > >> Qpid-dispatch side.
> >> > > >> I attached the amqp communication into this mail.
> >> > > >
> >> > > >
> >> > > > The last frame in that file is incomplete (the previous error
> >> > regarding the
> >> > > > map body may or may not be a wireshark issue),but that l

Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-23 Thread Rabih M
Hello,

When I added to the JMS test the ".setJMSReplyTo()" queue, like you
suggested, it worked.
But I cheated a little to make it work because I did not find a way in JMS
to create a local queue. So I created a queue on the qpid-dispatch and read
the result from it. You can check the attached code.

Is there any way, using qpid-jms, to create a local queue on the client
side to serve as a reply-to address?

For the proton-j test, I have the same question as above. I saw that it can
be done with the C++ API. Are there any examples for proton-j, or something
that can help me?

I saw that maybe in proton we don't need a local queue; maybe we can get
the address of the consumer directly and pass it in the reply-to field. Is
that correct?

Thanks,
Rabih



On Fri, Jan 20, 2017 at 10:18 PM, Rob Godfrey <rob.j.godf...@gmail.com>
wrote:

> On 20 January 2017 at 21:45, Ganesh Murthy <gmur...@redhat.com> wrote:
>
> >
> >
> > - Original Message -
> > > From: "Robbie Gemmell" <robbie.gemm...@gmail.com>
> > > To: users@qpid.apache.org
> > > Sent: Friday, January 20, 2017 2:18:45 PM
> > > Subject: Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms
> > >
> > > On 20 January 2017 at 19:06, Gordon Sim <g...@redhat.com> wrote:
> > > > On 20/01/17 18:40, Rabih M wrote:
> > > >>
> > > >> I inserted the map directly into the ObjectMessage like you told me
> > > >> Robbie and it worked.
> > > >>
> > > >> But like the proton-j case, the connector is not being created on
> the
> > > >> Qpid-dispatch side.
> > > >> I attached the amqp communication into this mail.
> > > >
> > > >
> > > > The last frame in that file is incomplete (the previous error
> > regarding the
> > > > map body may or may not be a wireshark issue),but that last frame is
> > likely
> > > > the response which would indicate the success or otherwise of the
> > frame. Is
> > > > there anything logged by the router?
> > > >
> > > > If you can, just running the router with PN_TRACE_FRM=1 is simpler
> for
> > this
> > > > sort of thing in my view.
> > > >
> > >
> > > As Gordon suggests, the proton trace logs might be more helpful. You
> > > can also do that with proton-j, and this the JMS client too since it
> > > uses proton-j.
> > >
> > > One key difference from the qdmanage case is you are not setting a
> > > reply-to value or correlation-id to use along with it. I don't know if
> > > the router cares about not being able to send a response or not
> > > though, but it might.
> >
> > You could add the following to your dispatch router conf file
> > (qdrouterd.conf) and restart the router (This will enable trace logging)
> >
> > log {
> > module: DEFAULT
> > enable: trace+
> > output: /path/to/log/qdrouterd.log
> > }
> >
> > When you run your management request, you will see proton frame trace and
> > you will see additional logging out of the dispatch Python agent -
> > something like the following -
> >
> > Fri Jan 20 15:27:23 2017 AGENT (debug) Agent request
> Message(address=None,
> > properties={'operation': 'CREATE', 'type': 'org.apache.qpid.dispatch.
> connector',
> > 'name': 'rabih.connector'}, body={'port': '5673', 'role':
> > 'route-container', 'addr': 'broker-machine', 'name': 'rabih.connector'},
> > reply_to='amqp:/_topo/0/Router.A/temp.i4H_ZOvee1xhGxx',
> correlation_id=1L)
> > Fri Jan 20 15:27:23 2017 CONN_MGR (info) Configured Connector:
> > broker-machine:5673 proto=any, role=route-container
> > Fri Jan 20 15:27:23 2017 AGENT (debug) Add entity:
> > ConnectorEntity(addr=broker-machine, allowRedirect=True, cost=1,
> > host=127.0.0.1, identity=connector/127.0.0.1:5673:rabih.connector,
> > idleTimeoutSeconds=16, maxFrameSize=16384, maxSessions=32768,
> > name=rabih.connector, port=5673, role=route-container,
> > stripAnnotations=both, type=org.apache.qpid.dispatch.connector,
> > verifyHostName=True)
> >
> > The Python management agent will not process your request if there is no
> > reply_to as seen here -
> >
> > https://github.com/apache/qpid-dispatch/blob/master/
> > python/qpid_dispatch_internal/management/agent.py#L822
> >
> >
> >
> Can I ask why the agent doesn't process the request?  Just because there is
> a reply-to doesn't mean that the sender will be around to actually hear t

Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-20 Thread Rabih M
I inserted the map directly into the ObjectMessage like you told me Robbie
and it worked.

But like the proton-j case, the connector is not being created on the
Qpid-dispatch side.
I attached the amqp communication into this mail.

Best regards,
Rabih

On Fri, Jan 20, 2017 at 7:24 PM, Rabih M <rabih.prom...@gmail.com> wrote:

> Thanks Robbie,
>
> For the JMS, i will try to do that and i will keep you posted for the
> result.
>
> Concerning Proton-j:
> I configured the Target.
> But the connector is still not created on the qpid-dispatch side...
>
> I compared the sent packets using wireshark but i don't think there are
> significant differences.
> I attached the amqp exchanges in both cases.
>
> Do you think i have some missing info in my proton-j message?
>
> Thanks,
> Rabih
>
> On Fri, Jan 20, 2017 at 7:09 PM, Robbie Gemmell <robbie.gemm...@gmail.com>
> wrote:
>
>> On 20 January 2017 at 17:29, Rabih M <rabih.prom...@gmail.com> wrote:
>> > Hello,
>> >
>> > Thank you for you answers.
>> >
>> > I am trying to experiment to see if i can do it with the info you gave
>> me.
>> >
>> > I took a simple example to start: qdmanage -b :5672
>> create
>> > --type=connector role=route-container addr=broker-machine port=5673
>> > name=rabih.connector
>> > My goal is to imitate the behavior using JMS and Proton-j.
>> >
>> > First, I tried with JMS (attached the code):
>> > I used an ObjectMessage with JMS_AMQP_TYPED_ENCODING set to true. And I
>> used
>> > the proton-j encoder to encode the map before sending it.
>> > But, I am getting a MessageFormatException("Failed to serialize object")
>> >
>> > Do you have any idea how can i pass the byte array without failing?
>> >
>>
>> I'd be rather surprised if this is actually what you want to do. I'd
>> expect you to set the map directly on the JMS ObjectMessage, and then
>> thanks to the special flag the client will encode it itself as an
>> amqp-value section containing a map (which now that ive kept reading,
>> matches what you are attempting with proton-j).
>>
>> >
>> > Second, I tried with Proton-j (attached code):
>> > The code i wrote is based on send example in proton-j.
>> > But the problem is that i am not able to find how to set the destination
>> > queue of the message. (i used Message.setAddress() but it is wrong)
>> >
>> > How can i set a destination queue for the sender or message?
>>
>> To give the link an addres you need to set the link 'source' and
>> 'target' and use the latter to indicate the address for the sending
>> link before you open it. E.g for address "queue", the most basic
>> config would be:
>>
>> +Target target = new Target();
>> +target.setAddress("queue");
>> +snd.setTarget(target );
>> +snd.setSource(new Source());
>>
>>
>> >
>> > PS. I used wireshark to see the amqp messages that are passing on the
>> > network and copy the message parameters.
>> >
>> >
>> > Best Regards,
>> > Rabih
>> >
>> > On Wed, Jan 18, 2017 at 2:35 PM, Rob Godfrey <rob.j.godf...@gmail.com>
>> > wrote:
>> >>
>> >> So I think what we said - and I can't find it written down anywhere in
>> the
>> >> draft, though we reference the JSON spec in the pre-amble, is that any
>> >> value in the headers or body should be able to be sent in the native
>> AMQP
>> >> type (and we might need some words there about converting between
>> various
>> >> numeric types), or as a JSON serialized string.  We didn't (to my
>> >> recollection) talk about whether there should be a way for the
>> requester
>> >> to
>> >> be able to influence the form of the reply.
>> >>
>> >> Currently the implementation of AMQP Management in the Qpid Broker for
>> >> Java
>> >> follows the above conventions (any inbound value can be in the native
>> >> type,
>> >> or as a JSON string which can convert to the desired type, however
>> there
>> >> is
>> >> no mechanism for controlling the nature of responses).
>> >>
>> >> Perhaps this is something we should talk about soon ;-) ?
>> >>
>> >> -- Rob
>> >>
>> >> On 18 January 2017 at 14:29, Ted Ross <tr...@redhat.com> wrote:
>> >>
>> &g

Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-20 Thread Rabih M
Thanks Robbie,

For JMS, I will try to do that and I will keep you posted on the result.

Concerning Proton-j:
I configured the Target.
But the connector is still not created on the qpid-dispatch side...

I compared the sent packets using Wireshark but I don't think there are
significant differences.
I attached the AMQP exchanges in both cases.

Do you think I have some missing info in my proton-j message?

Thanks,
Rabih

On Fri, Jan 20, 2017 at 7:09 PM, Robbie Gemmell <robbie.gemm...@gmail.com>
wrote:

> On 20 January 2017 at 17:29, Rabih M <rabih.prom...@gmail.com> wrote:
> > Hello,
> >
> > Thank you for you answers.
> >
> > I am trying to experiment to see if i can do it with the info you gave
> me.
> >
> > I took a simple example to start: qdmanage -b :5672
> create
> > --type=connector role=route-container addr=broker-machine port=5673
> > name=rabih.connector
> > My goal is to imitate the behavior using JMS and Proton-j.
> >
> > First, I tried with JMS (attached the code):
> > I used an ObjectMessage with JMS_AMQP_TYPED_ENCODING set to true. And I
> used
> > the proton-j encoder to encode the map before sending it.
> > But, I am getting a MessageFormatException("Failed to serialize object")
> >
> > Do you have any idea how can i pass the byte array without failing?
> >
>
> I'd be rather surprised if this is actually what you want to do. I'd
> expect you to set the map directly on the JMS ObjectMessage, and then
> thanks to the special flag the client will encode it itself as an
> amqp-value section containing a map (which now that ive kept reading,
> matches what you are attempting with proton-j).
>
> >
> > Second, I tried with Proton-j (attached code):
> > The code i wrote is based on send example in proton-j.
> > But the problem is that i am not able to find how to set the destination
> > queue of the message. (i used Message.setAddress() but it is wrong)
> >
> > How can i set a destination queue for the sender or message?
>
> To give the link an addres you need to set the link 'source' and
> 'target' and use the latter to indicate the address for the sending
> link before you open it. E.g for address "queue", the most basic
> config would be:
>
> +Target target = new Target();
> +target.setAddress("queue");
> +snd.setTarget(target );
> +snd.setSource(new Source());
>
>
> >
> > PS. I used wireshark to see the amqp messages that are passing on the
> > network and copy the message parameters.
> >
> >
> > Best Regards,
> > Rabih
> >
> > On Wed, Jan 18, 2017 at 2:35 PM, Rob Godfrey <rob.j.godf...@gmail.com>
> > wrote:
> >>
> >> So I think what we said - and I can't find it written down anywhere in
> the
> >> draft, though we reference the JSON spec in the pre-amble, is that any
> >> value in the headers or body should be able to be sent in the native
> AMQP
> >> type (and we might need some words there about converting between
> various
> >> numeric types), or as a JSON serialized string.  We didn't (to my
> >> recollection) talk about whether there should be a way for the requester
> >> to
> >> be able to influence the form of the reply.
> >>
> >> Currently the implementation of AMQP Management in the Qpid Broker for
> >> Java
> >> follows the above conventions (any inbound value can be in the native
> >> type,
> >> or as a JSON string which can convert to the desired type, however there
> >> is
> >> no mechanism for controlling the nature of responses).
> >>
> >> Perhaps this is something we should talk about soon ;-) ?
> >>
> >> -- Rob
> >>
> >> On 18 January 2017 at 14:29, Ted Ross <tr...@redhat.com> wrote:
> >>
> >> >
> >> >
> >> > On 01/18/2017 07:45 AM, Gordon Sim wrote:
> >> >
> >> >> On 18/01/17 10:45, Rob Godfrey wrote:
> >> >>
> >> >>> In terms of sending maps/lists I think we said (at OASIS), though it
> >> >>> is
> >> >>> possibly not yet in the draft spec, that Json formatted equivalents
> >> >>> should
> >> >>> be able to be used for all values... however I have no idea if the
> >> >>> Dispatch
> >> >>> Router supports that.
> >> >>>
> >> >>
> >> >> I think that would be very sensible.
> >> >>
> >> &g

Re: [Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-20 Thread Rabih M
Hello,

Thank you for your answers.

I am trying to experiment to see if I can do it with the info you gave me.

I took a simple example to start: qdmanage -b :5672
create --type=connector role=route-container addr=broker-machine port=5673
name=rabih.connector
My goal is to imitate the behavior using JMS and Proton-j.

First, I tried with JMS (code attached):
I used an ObjectMessage with JMS_AMQP_TYPED_ENCODING set to true, and I
used the proton-j encoder to encode the map before sending it.
But I am getting a MessageFormatException("Failed to serialize object").

Do you have any idea how I can pass the byte array without failing?

Second, I tried with Proton-j (code attached):
The code I wrote is based on the send example in proton-j.
But the problem is that I am not able to find how to set the destination
queue of the message. (I used Message.setAddress(), but it is wrong.)

How can I set a destination queue for the sender or message?

PS. I used Wireshark to see the AMQP messages that are passing on the
network and to copy the message parameters.


Best Regards,
Rabih

On Wed, Jan 18, 2017 at 2:35 PM, Rob Godfrey 
wrote:

> So I think what we said - and I can't find it written down anywhere in the
> draft, though we reference the JSON spec in the pre-amble, is that any
> value in the headers or body should be able to be sent in the native AMQP
> type (and we might need some words there about converting between various
> numeric types), or as a JSON serialized string.  We didn't (to my
> recollection) talk about whether there should be a way for the requester to
> be able to influence the form of the reply.
>
> Currently the implementation of AMQP Management in the Qpid Broker for Java
> follows the above conventions (any inbound value can be in the native type,
> or as a JSON string which can convert to the desired type, however there is
> no mechanism for controlling the nature of responses).
>
> Perhaps this is something we should talk about soon ;-) ?
>
> -- Rob
>
> On 18 January 2017 at 14:29, Ted Ross  wrote:
>
> >
> >
> > On 01/18/2017 07:45 AM, Gordon Sim wrote:
> >
> >> On 18/01/17 10:45, Rob Godfrey wrote:
> >>
> >>> In terms of sending maps/lists I think we said (at OASIS), though it is
> >>> possibly not yet in the draft spec, that Json formatted equivalents
> >>> should
> >>> be able to be used for all values... however I have no idea if the
> >>> Dispatch
> >>> Router supports that.
> >>>
> >>
> >> I think that would be very sensible.
> >>
> >
> > Dispatch Router does not support Json formatted bodies at present, but
> > this is a feature that I would be in favor of putting on the roadmap.
> >
> >
> >
> >>
> >>
> >>
> >
> >
>
import java.nio.ByteBuffer;

import java.util.HashMap;
import java.util.Map;

import javax.jms.*;

import org.apache.qpid.jms.JmsConnectionFactory;
import org.apache.qpid.proton.codec.*;


public class JMSTest {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://:");
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("$management");
        MessageProducer producer = session.createProducer(queue);
        ObjectMessage objectMessage = session.createObjectMessage();
        objectMessage.setBooleanProperty("JMS_AMQP_TYPED_ENCODING", true);
        objectMessage.setStringProperty("operation", "CREATE");
        objectMessage.setStringProperty("type", "org.apache.qpid.dispatch.connector");
        objectMessage.setStringProperty("name", "rabih.connector");
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("name", "rabih.connector");
        map.put("role", "route-container");
        map.put("addr", "brokerMachine");
        map.put("port", "port");
        EncoderImpl encoder = new EncoderImpl(new DecoderImpl());
        ByteBuffer byteBuffer = ByteBuffer.allocate();
        WritableBuffer byteBufferWrapper = new WritableBuffer.ByteBufferWrapper(byteBuffer);
        encoder.setByteBuffer(byteBufferWrapper);
        encoder.writeMap(map);
        System.out.println(new String(byteBuffer.array()));
        objectMessage.setObject(byteBuffer.array());
        producer.send(objectMessage);
        connection.close();
    }
}
import java.io.IOException;

import java.nio.BufferOverflowException;

import java.util.HashMap;
import java.util.Map;

import org.apache.qpid.proton.Proton;

[Qpid Dispatch] Manage Dispatch router from Qpid Jms

2017-01-17 Thread Rabih M
Hello,

If I understood correctly, Qpid-Dispatch can be managed using AMQP messages.
I saw that the management code is based on the proton Python binding.

Do you think it is feasible to manage the dispatch router the same way from
the Qpid-JMS API?
Was there any work done in this direction before? Would it be too
complicated to do?

Thanks,
Rabih


Re: [Dispatcher] system_tests_protocol_family test fails if no ipv6 is supported

2016-06-06 Thread Rabih M
Apparently the mail server does not accept ".patch" attachments...
Attaching again as ".txt".

Rabih

On Mon, Jun 6, 2016 at 6:43 PM, Rabih M <rabih.prom...@gmail.com> wrote:

> Hello,
>
> I made a small patch to fix some minor bugs in the dispatcher unit-tests
> concerning ipv6. (patch attached)
> But i have one remaining unit-test that fails "system_tests_policy". This
> is because tests/policy-1/policy-safari.json
> and tests/policy-1/policy-boardwalk.json have has "::1" in the localhost
> field.
> I can't find a way to tell the test to ignore this value if the machine is
> in ipv4 mode and to keep it in case we have both...
>
> Anybody has an idea?
>
> Thanks,
> Rabih
>
> On Fri, Jun 3, 2016 at 8:14 PM, Ganesh Murthy <gmur...@redhat.com> wrote:
>
>> Hi Rabih,
>>We actually looked into that. Chuck Rolke already raised a JIRA for
>> that - https://issues.apache.org/jira/browse/DISPATCH-216
>>
>> We thought we could rely on Python's socket.has_ipv6 to turn off the test
>> but it turned out that socket.has_ipv6 is a constant (please read comments
>> in the above JIRA).
>>
>> I have not found a way in Python or in cmake to accurately find out if a
>> machine has IPv6 turned off.
>>
>> I would love to hear if anybody has any ideas.
>>
>> Thanks.
>>
>> - Original Message -
>> > From: "Rabih M" <rabih.prom...@gmail.com>
>> > To: users@qpid.apache.org
>> > Sent: Friday, June 3, 2016 1:09:52 PM
>> > Subject: [Dispatcher] system_tests_protocol_family test fails if no
>> ipv6 is supported
>> >
>> > Hello,
>> >
>> > Is there any way to make system_tests_protocol_family test pass if the
>> > machine does not support ipv6? is there a way cmake can detect that?
>> >
>> > Thanks,
>> > R
>> >
>>
>>
>>
>
diff --git a/python/qpid_dispatch_internal/policy/policy_local.py 
b/python/qpid_dispatch_internal/policy/policy_local.py
index cb62896..73eb8c5 100644
--- a/python/qpid_dispatch_internal/policy/policy_local.py
+++ b/python/qpid_dispatch_internal/policy/policy_local.py
@@ -730,7 +730,10 @@ class PolicyLocal(object):
 Test function to load a policy.
 @return:
 """
-ruleset_str = '["policyAccessRuleset", {"applicationName": 
"photoserver","maxConnections": 50,"maxConnPerUser": 5,"maxConnPerHost": 
20,"userGroups": {"anonymous":   "anonymous","users":   "u1, 
u2","paidsubscribers": "p1, p2","test":"zeke, ynot","admin":
   "alice, bob","superuser":   "ellen"},"ingressHostGroups": {"Ten18": 
"10.18.0.0-10.18.255.255","EllensWS":  "72.135.2.9","TheLabs":   
"10.48.0.0-10.48.255.255, 192.168.100.0-192.168.100.255","localhost": 
"127.0.0.1, ::1","TheWorld":  "*"},"ingressPolicies": {"anonymous":   
"TheWorld","users":   "TheWorld","paidsubscribers": "TheWorld","test":  
  "TheLabs","admin":   "Ten18, TheLabs, localhost","superuser": 
  "EllensWS, localhost"},"connectionAllowDefault": true,'
+if HostAddr.has_ipv6():
+ruleset_str = '["policyAccessRuleset", {"applicationName": 
"photoserver","maxConnections": 50,"maxConnPerUser": 5,"maxConnPerHost": 
20,"userGroups": {"anonymous":   "anonymous","users":   "u1, 
u2","paidsubscribers": "p1, p2","test":"zeke, ynot","admin":
   "alice, bob","superuser":   "ellen"},"ingressHostGroups": {"Ten18": 
"10.18.0.0-10.18.255.255","EllensWS":  "72.135.2.9","TheLabs":   
"10.48.0.0-10.48.255.255, 192.168.100.0-192.168.100.255","localhost": 
"127.0.0.1, ::1","TheWorld":  "*"},"ingressPolicies": {"anonymous":   
"TheWorld","users":   "TheWorld",

Re: [Dispatcher] system_tests_protocol_family test fails if no ipv6 is supported

2016-06-06 Thread Rabih M
Hello,

I made a small patch to fix some minor bugs in the dispatcher unit tests
concerning IPv6 (patch attached).
But I have one remaining unit test that fails, "system_tests_policy". This
is because tests/policy-1/policy-safari.json
and tests/policy-1/policy-boardwalk.json have "::1" in the localhost
field.
I can't find a way to tell the test to ignore this value if the machine is
in IPv4-only mode and to keep it in case we have both...

Anybody has an idea?

Thanks,
Rabih

On Fri, Jun 3, 2016 at 8:14 PM, Ganesh Murthy <gmur...@redhat.com> wrote:

> Hi Rabih,
>We actually looked into that. Chuck Rolke already raised a JIRA for
> that - https://issues.apache.org/jira/browse/DISPATCH-216
>
> We thought we could rely on Python's socket.has_ipv6 to turn off the test
> but it turned out that socket.has_ipv6 is a constant (please read comments
> in the above JIRA).
>
> I have not found a way in Python or in cmake to accurately find out if a
> machine has IPv6 turned off.
>
> I would love to hear if anybody has any ideas.
>
> Thanks.
>
> - Original Message -
> > From: "Rabih M" <rabih.prom...@gmail.com>
> > To: users@qpid.apache.org
> > Sent: Friday, June 3, 2016 1:09:52 PM
> > Subject: [Dispatcher] system_tests_protocol_family test fails if no ipv6
> is supported
> >
> > Hello,
> >
> > Is there any way to make system_tests_protocol_family test pass if the
> > machine does not support ipv6? is there a way cmake can detect that?
> >
> > Thanks,
> > R
> >
>
>
>


[Dispatcher] system_tests_protocol_family test fails if no ipv6 is supported

2016-06-03 Thread Rabih M
Hello,

Is there any way to make the system_tests_protocol_family test pass if the
machine does not support IPv6? Is there a way CMake can detect that?

Thanks,
R


[proton-c 0.12.0] SASL unit test are failing

2016-06-01 Thread Rabih M
Hello,

I tried to run the proton 0.12.0 unit tests on Linux; everything works
fine except when I run the SASL-related unit tests in python-test.

The following 2 tests are failing when SASL is activated but are passing
when SASL is deactivated:

proton_tests.engine.SaslEventTest.testPipelinedServerReadFirst:
File
"/data/src/new/proton-ssl/qpid-proton/tests/python/proton_tests/engine.py",
line 2679, in testPipelinedServerReadFirst
assert server.sasl().outcome == SASL.OK
AssertionError

proton_tests.engine.SaslEventTest.testPipelinedServerWriteFirst:
File
"/data/src/new/proton-ssl/qpid-proton/tests/python/proton_tests/engine.py",
line 2288, in expect_oneof
assert False, "actual events %s did not match any of the expected
sequences: %s" % (events, sequences)
AssertionError: actual events [PN_CONNECTION_INIT(), PN_CONNECTION_BOUND(), PN_TRANSPORT_ERROR(), PN_TRANSPORT_TAIL_CLOSED(), PN_TRANSPORT_HEAD_CLOSED(), PN_TRANSPORT_CLOSED()] did not match any of the expected sequences:
((PN_CONNECTION_INIT, PN_CONNECTION_BOUND, PN_TRANSPORT),)

The following test is only launched when SASL is activated and is blocking:

test_authentication_via_container_attributes:
2016-06-01 12:11:18,077 ERROR amqp:unauthorized-access: Authentication
failed [mech=(null)]
2016-06-01 12:11:18,078 ERROR amqp:connection:framing-error: connection
aborted

Has anyone already encountered those failures?

Thanks,
Rabih


Qpid C++ 0.34 unit tests are failing with visual studio 2013

2016-05-18 Thread Rabih M
Hello,

I compiled the Qpid C++ code on Windows using Visual Studio 2013.
I have some unit test failures; I tried to debug, but it is not obvious to
find the root cause.

In qpid-client-test, the client is not able to connect to the qpid broker.
This is the output I am getting:
...
test 1
Start 1: qpid-client-test

1: Test command: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
D:/qpid-cpp-0.34/qpid-cpp-0.34/qpid/cpp/src/tests/run_test.ps1 -buildDir
D:/qpid-cpp-0.34/qpid-cpp-0.34/qpid/cpp/build-dir -startBroker --
D:/qpid-cpp-0.34/qpid-cpp-0.34/qpid/cpp/build-dir/src/tests/qpid-client-test.exe
1: Test timeout computed to be: 1500
1: 2016-05-18 16:46:39 [Broker] notice Broker (pid=10372) start-up
1: 2016-05-18 16:46:39 [Broker] notice SASL disabled: No Authentication
Performed
1: 2016-05-18 16:46:39 [Security] notice Listening for SSL connections on
TCP port 5671
1: 2016-05-18 16:46:39 [Network] notice Listening on TCP/TCP6 port 21077
1: 2016-05-18 16:46:40 [Network] warning Connect failed: No connection
could be made because the target machine actively refused it. :
localhost:21077
(D:\qpid-cpp-0.34\qpid-cpp-0.34\qpid\cpp\src\qpid\sys\windows\WinSocket.cpp:164)
1: Connection  closed
1: WARNING: suspicious log entries in
\nD:\qpid-cpp-0.34\qpid-cpp-0.34\qpid\cpp\build-dir\src\tests\qpid-client-test.log:1:2016-05-18
16:46:40 [Network] warning Connect failed: No connection could be made
because the target machine actively refused it. : localhost:21077
(D:\qpid-cpp-0.34\qpid-cpp-0.34\qpid\cpp\src\qpid\sys\windows\WinSocket.cpp:164)
1: 2016-05-18 16:46:40 [Broker] notice Broker (pid=10372) shut-down
1/1 Test #1: qpid-client-test .***Failed2.15 sec

Has anyone encountered this error before?
The firewall on the machine is off, and I verified using netstat that
port 21077 was open and listening.


Thanks,
Rabih


Re: Proton C++ question

2016-05-17 Thread Rabih M
Thank you Gordon for your reply.

I tried to add the link.close() in addition to connection.close() but the
problem remains.

The workaround with the credit window works.

Maybe I should create a JIRA issue for the bug.

Thanks,
Rabih
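
For reference, a rough sketch of the credit-window workaround Gordon
describes below, against the 0.12.0 API used in the original example
(includes omitted as in that example; whether open_receiver() hands back a
receiver with a flow() method in 0.12.0 is an assumption, but the idea is
simply: no automatic credit window, one credit issued by hand):

class SingleMessageReceiver : public proton::handler {
   proton::url url;

public:
   // prefetch = 0 disables the automatic credit window
   SingleMessageReceiver(const proton::url& u) : handler(0), url(u) {}

   void on_start(proton::event& e) {
      proton::connection conn = e.container().connect(url);
      proton::receiver r = conn.open_receiver(url.path());
      r.flow(1);  // ask for exactly one message; never re-issued automatically
   }

   void on_message(proton::event& e) {
      std::cout << e.message().body() << std::endl;
      e.connection().close();
   }
};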

On Tue, May 17, 2016 at 10:59 AM, Gordon Sim <g...@redhat.com> wrote:

> On 10/05/16 11:38, Rabih M wrote:
>
>> Hello,
>>
>>
>>
>> I have an issue while using proton C++ binding 0.12.0. In my use case, I
>> need to get one and only one message from a distant queue.
>>
>> But I am not able to do it with proton. I read that in the constructor of
>> handler I could specify that using prefetch parameter, but it is not
>> working.
>>
>
> I'm not an expert on the c++ binding, but the prefetch is a window that is
> automatically moved. So even if the window is only 1 message wide, once
> that message is handled the library will automatically issue credit to move
> the window forward.
>
>
> That said, it is reasonable to expect that closing the connection as you
> do on receiving the first message would prevent credit being reissued. Have
> you tried closing the link as well? Does that make a difference? If not I'd
> suggest this is a bug (the library should not issue extra credit on a link
> that the application has requested be closed).
>
> As a workaround, you should be able to disable the credit window be
> setting it to 0, then use the receiver::flow() method to add a single
> message credit.
>
> Btw, I believe this has changed on the master (and therefore will be
> different in the next release). It seems the prefetch is now called
> credit_window, and is set through the link_options when creating the
> receiver. Also the flow() method is now add_credit.
>
> Again, I repeat I am not an expert on the c++ binding, so apologies if any
> of the above is incorrect. Hopefully one of the experts will then step in
> and correct me :-)
>
> Example:
>>
>> Let’s consider I have a queue on the network that contains 2 messages.
>>
>> Here is my handler’s implementation:
>>
>> class SimpleReceiver : public proton::handler {
>>
>>private:
>>
>>  proton::url url;
>>
>>
>>
>>public:
>>
>>  SimpleReceiver(const proton::url& u) : *handler(1)*, url(u) {}
>>
>>
>>
>>  void on_start(proton::event ) {
>>
>>  proton::connection conn = e.container().connect(url);
>>
>>  conn.open_receiver(url.path());
>>
>>  }
>>
>>
>>
>>  void on_message(proton::event ) {
>>
>>  std::cout << e.message().body() << std::endl;
>>
>>  e.connection().close();
>>
>>  }
>>
>> };
>>
>>
>>
>> When I run this code the queue is emptied and the client took the 2
>> messages from the queue.
>>
>>
>>
>> Could you help me understand why I am getting this behavior?
>>
>>
>>
>> Thank you,
>>
>> Rabih
>>
>>
>
>
>


Re: Proton C++ question

2016-05-13 Thread Rabih M
Hello again,

Did anyone have time to check my question?

Am I sending to the correct mailing list?

Thanks,
Rabih

On Tue, May 10, 2016 at 12:38 PM, Rabih M <rabih.prom...@gmail.com> wrote:

> Hello,
>
>
>
> I have an issue while using proton C++ binding 0.12.0. In my use case, I
> need to get one and only one message from a distant queue.
>
> But I am not able to do it with proton. I read that in the constructor of
> handler I could specify that using prefetch parameter, but it is not
> working.
>
>
>
> Example:
>
> Let’s consider I have a queue on the network that contains 2 messages.
>
> Here is my handler’s implementation:
>
>
>
> class SimpleReceiver : public proton::handler {
>
>   private:
>
> proton::url url;
>
>
>
>   public:
>
> SimpleReceiver(const proton::url& u) : *handler(1)*, url(u) {}
>
>
>
> void on_start(proton::event ) {
>
> proton::connection conn = e.container().connect(url);
>
> conn.open_receiver(url.path());
>
> }
>
>
>
> void on_message(proton::event ) {
>
> std::cout << e.message().body() << std::endl;
>
> e.connection().close();
>
> }
>
> };
>
>
>
> When I run this code the queue is emptied and the client took the 2
> messages from the queue.
>
>
>
> Could you help me understand why I am getting this behavior?
>
>
>
> Thank you,
>
> Rabih
>


Proton C++ question

2016-05-10 Thread Rabih M
Hello,



I have an issue while using the proton C++ binding 0.12.0. In my use case,
I need to get one and only one message from a remote queue.

But I am not able to do it with proton. I read that in the constructor of
the handler I could specify that using the prefetch parameter, but it is
not working.



Example:

Let’s consider I have a queue on the network that contains 2 messages.

Here is my handler’s implementation:



class SimpleReceiver : public proton::handler {
  private:
    proton::url url;

  public:
    // a prefetch of 1 is requested via the handler constructor
    SimpleReceiver(const proton::url& u) : handler(1), url(u) {}

    void on_start(proton::event& e) {
        proton::connection conn = e.container().connect(url);
        conn.open_receiver(url.path());
    }

    void on_message(proton::event& e) {
        std::cout << e.message().body() << std::endl;
        e.connection().close();
    }
};



When I run this code, the queue is emptied and the client takes both
messages from the queue.



Could you help me understand why I am getting this behavior?



Thank you,

Rabih