After sending SIGTERM, the SIGKILL signal is
> sent. I raised
> QPID-7910 [1] to improve the stop script. The possible improvements are
> listed in the JIRA description.
>
> Kind Regards,
> Alex
>
>
> [1] https://issues.apache.org/jira/browse/QPID-7910
>
>
> On 18 S
Hello,
Following up on Adel's email.
I took a look at the initiateShutdown endpoint you mentioned.
I tested it and it seems to work but I don't see it in any documentation, not
even in the broker apidocs.
Is there a reason for that? Is this feature officially supported?
And one more question,
Hello,
We played a bit with the JMS selector lately and we noticed that when you
specify an invalid (wrong syntax) x-filter-jms-selector binding argument, the
creation of the binding still succeeds. Is that expected?
All incoming messages are then discarded from the given queue so that's ok but
Hello,
According to the apidocs/latest/exchange documentation, it's possible to remove
all bindings for a given destination (here a queue) using the 'unbind'
operation.
But when testing it, it only works when specifying both the destination and the
bindingKey.
Is it something that should be
Queue managed operation
>> 'getPublishingLinks' and call unbind for every binding returned.
>>
>> Are you looking for an operation to delete all bindings for a given
>> destination?
>>
>>
>> Kind Regards,
>> Alex
>>
>>
>> On 15 May 20
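The workaround described above (iterate the bindings returned by 'getPublishingLinks' and unbind each one) can be sketched as a small loop. This is a hedged illustration, not Broker-J code: the two HTTP operations are injected as callables, and the dict shape of each link ('exchange', 'bindingKey') is an assumption for the example.

```python
def unbind_all(get_publishing_links, unbind):
    """Remove every binding for a destination by iterating its publishing
    links and unbinding each, as suggested in the thread.

    get_publishing_links: callable returning a list of dicts with (assumed)
        keys 'exchange' and 'bindingKey'.
    unbind: callable(exchange, binding_key) performing one unbind call.
    Returns the number of bindings removed.
    """
    removed = 0
    for link in get_publishing_links():
        unbind(link["exchange"], link["bindingKey"])
        removed += 1
    return removed
```

With the HTTP calls injected this way, the loop itself can be exercised against stubs before wiring it to the broker's REST endpoints.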
+1
- Run Murex test suite with Broker-J 7.0.4, Dispatch-Router 0.7.0 and Proton
0.16.0
-Original Message-
From: Keith W
Sent: Wednesday 30 May 2018 15:45
To: users@qpid.apache.org
Subject: Re: [VOTE] Release Apache Qpid Broker-J 7.0.4
+1.
My testing was:
1) Verified the md5/sha
Hello,
I was testing the overflow policy introduced in Broker-J 7.
With REJECT policy, the message is indeed discarded if the queue is already
full but I was actually expecting an exception on client side and didn't
receive anything.
Is that expected? Did I miss something?
Thanks,
Olivier
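The REJECT semantics being asked about can be modelled with a toy bounded queue: once the limit is reached, a put fails loudly instead of silently dropping the message, which is the failure the client would expect to surface as an exception. This is a sketch of the concept only, not Broker-J code.

```python
class QueueFullError(Exception):
    """Raised when a message is rejected because the queue is at capacity."""

class RejectingQueue:
    """Toy model of a REJECT overflow policy: a bounded queue that refuses
    new messages once full, rather than dropping them silently."""

    def __init__(self, max_depth):
        self.max_depth = max_depth
        self._messages = []

    def put(self, message):
        if len(self._messages) >= self.max_depth:
            # REJECT: signal the failure back to the producer.
            raise QueueFullError("queue limit reached, message rejected")
        self._messages.append(message)

    def get(self):
        return self._messages.pop(0)
```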
rver.virtualhostnode.AbstractStandardVirtualHostNode.
> >> You can follow an implementation of JSON Virtual Host Node
> >> (org.apache.qpid.server.virtualhostnode.JsonVirtualHostNodeImpl).
> >>
> >
> >
> > I'm not sure I understand how any of this relates to
into it.
Kind Regards,
Alex
On 26 June 2018 at 12:33, VERMEULEN Olivier wrote:
> Hello,
>
> I have a problem with the 'JDBC' configuration store when using Oracle.
> In this mode the Broker creates a table named
> 'QPID_CONFIGURED_OBJECT_HIERARCHY'.
> The problem is that this name is alr
Hello,
While upgrading from Broker-J 6.1.4 to 7.0.3 I realized that there is a new
plugin now: amqp-1-0-jdbc-store
I can't seem to find any documentation on what this plugin is supposed to do.
I'm asking this because I have a problem of table name size with one of the
tables coming from this
the queue
UUIDs) in a virtualHostInitialConfiguration provided to the VirtualHostNode on
creation help, I wonder...
-- Rob
On Fri, 22 Jun 2018 at 12:14, VERMEULEN Olivier
wrote:
> Hello Rob,
>
> The main problem is to keep my version of the config in sync with the
> one of
Hello,
I have a problem with the 'JDBC' configuration store when using Oracle.
In this mode the Broker creates a table named
'QPID_CONFIGURED_OBJECT_HIERARCHY'.
The problem is that this name is already 32 characters long while Oracle
(before version 12c R2) has a limit of 30 characters.
Would
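The length arithmetic behind the problem above is easy to check: a small screen of table names against the 30-character identifier limit that applied before Oracle 12c Release 2 (12c R2 raised it to 128). A minimal sketch:

```python
# Identifier limit before Oracle 12c Release 2 (raised to 128 in 12c R2).
ORACLE_MAX_IDENTIFIER_PRE_12CR2 = 30

def over_limit(table_names, limit=ORACLE_MAX_IDENTIFIER_PRE_12CR2):
    """Return the table names that exceed the Oracle identifier limit."""
    return [name for name in table_names if len(name) > limit]
```

Running this over the Broker-J table names shows that 'QPID_CONFIGURED_OBJECT_HIERARCHY' (32 characters) is the offender.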
link store is
essentially equivalent to pre-7.0 behaviour; so if you don't need durable links
to be stored persistently then you should be able to omit the amqp-1-0-jdbc-store
plugin.
Hope this helps,
Rob
On Mon, 25 Jun 2018 at 09:53, VERMEULEN Olivier
wrote:
> Hello,
>
> While upgrading
r; so if you don't need durable
> links to be stored persistently then you should be able to omit the
> amqp-1-0-jdbc-store plugin.
>
> Hope this helps,
> Rob
>
> On Mon, 25 Jun 2018 at 09:53, VERMEULEN Olivier
>
> wrote:
>
> > Hello,
> >
> > While upgradin
Hello,
I've been playing with the non-destructive consumers for the past few days.
First I will explain the use case that works.
If I create a queue on a Broker-J (7.0.3) with ensureNonDestructiveConsumers
set to true and I put a message in it, then any consumer I create for this
queue will
Hello,
Indeed it would work, but ideally I'd like to stay stateless, hence the reset of the
statistics on the broker side.
Or, even better, could these metrics be published directly by the broker REST API?
Olivier V
-Original Message-
From: Olivier Mallassi [mailto:olivier.malla...@gmail.com]
Hello,
We already discussed with some of you our need @Murex for a C++ imperative
client API.
I remember Justin saying that you were making progress on this subject but I
can't seem to find any information about it.
Could you give us a quick status on this? Note that we would also be interested
Hello,
When the Java broker fails to connect to the database I get an error in the
logs but the broker keeps starting normally and is considered as "ready" in
the end...
You can check the log file attached. I reproduced the problem by specifying a
wrong derby database in the configuration.
not have plans to
include the work for the JIRA into upcoming releases. Please feel free to
contribute the patch for it to speed up the inclusion of the improvement into new
releases.
Kind Regards,
Alex
[1] https://issues.apache.org/jira/browse/QPID-7972
On 6 February 2018 at 11:01, VERMEULEN
Hello,
I was running some tests on a Java Broker with 'Memory' configuration store and
'DERBY' message store.
Is there a way with this configuration to recover the messages stored in DB
upon a broker restart?
Because it looks like the messages are mapped to the queue UUID and that they
are
are you experiencing by using a non-Memory configuration store in your
setup?
-- Rob
On Fri, 22 Jun 2018 at 10:47, VERMEULEN Olivier
wrote:
> Hello Keith,
>
> Thanks for the quick answer.
> Our target is a cluster of brokers and dispatch-routers.
> To configure it we created a manag
will clear the message store of the orphans.
What problem are you actually trying to solve?
Keith.
On 22 June 2018 at 08:54, VERMEULEN Olivier wrote:
> Hello,
>
> I was running some tests on a Java Broker with 'Memory' configuration store
> and 'DERBY' message store.
>
25, 2018 at 5:48 AM, VERMEULEN Olivier
> wrote:
> > Hello
> >
> > Thanks for your replies
> >
> > @Ganesh, I'm trying to set up the same use case with a more recent
> dispatch-router (1.0.0) but so far I can't even make the
> dispatch-router work. My autolinks
../sbin/qdrouterd: /lib/libk5crypto.so.3: no version information available
(required by /usr/lib64/libssl.so.10)
../sbin/qdrouterd: /lib/libkrb5.so.3: no version information available
(required by /usr/lib64/libssl.so.10)
2018-07-26 11:26:00.388557 +0200 AGENT (warning) Attribute 'source' of
-Original Message-
From: VERMEULEN Olivier
Sent: Wednesday 4 July 2018 15:36
To: users@qpid.apache.org
Subject: RE: [Broker-J] Reject overflow policy
Hello,
I get the same error as you now...
There must have been something wrong with my use case the first time.
Thanks for the help
Regards,
Alex
On 31 July 2018 at 12:44, VERMEULEN Olivier wrote:
> Hello,
>
> Actually I managed to reproduce my first use case where the client does not
> receive the exception.
> It happens when sending through a topic to a queue with REJECT policy enabled.
> The message is
(i.e. each router has an inter-router listener _and_ connector).
You only need one inter-router connection.
-Ted
On Mon, Jul 23, 2018 at 10:18 AM, VERMEULEN Olivier
wrote:
> Hello,
>
> I started 2 Dispatch-Routers (version 0.7.0) and 1 Broker-J (version
> 7.0.3) My first Dispatch-Rout
Hello,
With a derby message store, when a producer sends a message to the broker do we
have the guarantee that the message is actually flushed to disk when we
receive the acknowledgement?
Thanks,
Olivier
are committed
for both transactional and non-transactional deliveries.
Do you have any problem resulting in message loss with the derby store?
Kind Regards,
Alex
On 9 August 2018 at 15:57, VERMEULEN Olivier
wrote:
> Hello,
>
> With a derby message store, when a producer sends a message to the broker
Hello,
I have a messaging cluster with multiple dispatch-routers and multiple brokers
(Broker-J).
I'm using Qpid JMS for the client and when creating the connection factory I
pass the URIs of all the dispatch-routers to support failover.
The question is, what happens if I want to scale up my
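The connection-factory setup described above uses the Qpid JMS failover scheme, where every dispatch-router URI is listed up front as `failover:(amqp://h1:p,amqp://h2:p)`. A small helper that assembles that URI (the scheme is from the Qpid JMS configuration docs; any further query options would need checking there):

```python
def failover_uri(hosts, port=5672):
    """Build a Qpid JMS failover URI listing each dispatch-router,
    in the 'failover:(amqp://h1:p,amqp://h2:p)' form."""
    members = ",".join(f"amqp://{host}:{port}" for host in hosts)
    return f"failover:({members})"
```

Scaling up then means regenerating this URI with the new router list and recreating the connection factory, which is exactly the tension the question raises.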
Hello,
I started 2 Dispatch-Routers (version 0.7.0) and 1 Broker-J (version 7.0.3)
My first Dispatch-Router has an out autolink on a topic:
router {
    id: router.10104
    mode: interior
    worker-threads: 4
}
listener {
    host: 0.0.0.0
    port: 10104
    role: normal
Hello,
While working with the Broker-J HTTP management I found some strange behaviors,
especially while binding a queue to an exchange.
* If the exchange does not exist the creation of the binding returns 404
where I would expect a 5XX
* If the queue does not exist the creation of the
2018 17:58
To: users@qpid.apache.org
Subject: Re: [Broker-J] HTTP management
On Wed, 29 Aug 2018 at 17:21, VERMEULEN Olivier
wrote:
> Hello,
>
> While working with the Broker-J HTTP management I found some strange
> behaviors, especially while binding a queue to
Hello,
When downloading the latest version of the dispatch-router from the web page:
https://qpid.apache.org/releases/qpid-dispatch-1.2.0/index.html
I get an artifact that I can't even untar... gzip: stdin: not in gzip format
Note that on the other hand I have no problems when downloading it
Thanks for the quick reply.
I will try with link-routes then.
Olivier
-Original Message-
From: Ted Ross
Sent: Tuesday 3 July 2018 15:48
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] non-destructive consumers
On Tue, Jul 3, 2018 at 9:34 AM, VERMEULEN Olivier
wrote
use cases (not at the same time so
far).
So the question is, can I mix autolinks and linkRoutes in the dispatch-router
configuration?
Thanks,
Olivier
-Original Message-
From: VERMEULEN Olivier
Sent: Wednesday 4 July 2018 09:23
To: users@qpid.apache.org
Subject: RE: [Dispatch Router
Hello Rudyy
Thanks for the detailed answer.
I'm using a JMS client (qpid-jms-client 0.11.1) with a non-transacted session
and no specific flags.
For this use case I'm publishing directly in a standard queue.
Olivier
-Original Message-
From: Oleksandr Rudyy
Sent: Wednesday 4 July
logging on client (for example, using env
variable PN_TRACE_FRM=true) and debug logging on Broker side and repeat your
test in order to understand what is happening?
What is reported in both logs on reaching the limit?
Kind Regards,
Alex
On 4 July 2018 at 11:10, VERMEULEN Olivier wrote:
> He
Hello,
We're currently using the version 1.3.0 of the dispatch-router.
We are creating 10 connectors to our broker and 10 autolinks per topic/queue (1
for each connector).
This "connection pool" allows us to greatly improve performance, but we
noticed something strange in the logs.
The
: Wednesday 31 October 2018 15:42
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] Wrong IDs in the logs?
On Wed, Oct 31, 2018 at 10:30 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:
> Hello,
>
> We're currently using the version 1.3.0 of the dispatch-router.
>
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] Wrong IDs in the logs?
Olivier,
How do you specify the connection in your autolinks? Are you using
container-id or connector name?
-Ted
On Wed, Oct 31, 2018 at 10:30 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:
&
=in
connection=broker_conn${i}
done
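The loop fragment above stamps out one autolink per broker connector. A self-contained sketch of that generation step (the `broker_conn${i}` naming comes from the fragment; the autoLink attribute names are assumed from the qdrouterd.conf entity format and should be checked against your Dispatch version):

```python
def autolink_config(address, n_connectors, direction="in"):
    """Generate one autoLink config block per broker connector, mirroring
    the shell loop in the thread (connector names broker_conn<i>)."""
    blocks = []
    for i in range(1, n_connectors + 1):
        blocks.append(
            "autoLink {\n"
            f"    address: {address}\n"
            f"    direction: {direction}\n"
            f"    connection: broker_conn{i}\n"
            "}\n"
        )
    return "".join(blocks)
```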
-Original Message-
From: Ganesh Murthy
Sent: Friday 2 November 2018 15:06
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] Wrong IDs in the logs?
On Fri, Nov 2, 2018 at 6:51 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:
> Hello,
with JDBC, but my
experience with BDB and batch sizes was similar. If you are happy with the
original patch I'll apply that on trunk from QPID-8242.
-- Rob
On Mon, 29 Oct 2018 at 15:41, VERMEULEN Olivier
wrote:
> Hello Rob,
>
> I modified your patch to set a minimum batch size of 10 but d
On Tue, 2 Oct 2018 at 17:47, VERMEULEN Olivier
wrote:
> Hello Rob,
>
> We tested your fix on an Oracle database and it works fine, we even
> noticed a 7% improvement of the overall throughput!
>
Great!
> But the average size of the batches is quite small: between 3 or
Message -
> From: "VERMEULEN Olivier"
> To: users@qpid.apache.org
> Sent: Tuesday, November 6, 2018 10:57:11 AM
> Subject: RE: [Dispatch Router] Exceptions handling
>
> Another example when I define a max_message_size on the broker and try
> to send a message that is bi
Hello,
I'm using 1 dispatch-router (1.3.0), 2 brokers (7.0.3) and a simple topic/queue
configuration.
I was testing the multicast distribution on the dispatch-router with 1 producer
on the topic and 2 listeners on the queue.
In AUTO_ACK everything seems to work fine but when I switch to
] multicast distribution
On 12/11/18 14:58, VERMEULEN Olivier wrote:
> address {
>     prefix: queue_1
>     waypoint: yes
>     distribution: multicast
> }
This will not give you end to end acknowledgement with consuming clients. The
router will acknowledge each message receiv
name: broker.dell440srv.5673.queue_1
addr: queue_1
connection: broker.dell440srv.5673
direction: in
}
-Original Message-
From: Gordon Sim
Sent: Monday 12 November 2018 14:16
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution
On 12/11/18 13:07, V
] multicast distribution
On 12/11/18 19:32, VERMEULEN Olivier wrote:
> when all receivers acknowledge it
How is the set of 'all' receivers defined? All those active at the time the
message is received by the first router? (What if one of them had been
disconnected just before that mess
/11/18 11:59, VERMEULEN Olivier wrote:
> No if the subscriber is still down the message should also be resent.
> So you're right it's not simply the active subscribers, it would be all the
> subscribers that were connected to the dispatch-router at some point and that
> were not expli
:58, VERMEULEN Olivier wrote:
> I guess yes, all the active subscribers when the router receives the message.
> If the message is not successfully acked by all of them, then yes the message
> is requeued and sent again to all active subscribers.
> If one of the subscribers disconnects i
Hello,
I have the following setup: 1 dispatch-router (with defaultDistribution set to
unavailable), 1 broker and 1 JMS client.
When I try to send to a topic that does not exist, while connected to the
broker directly, I get the following exception:
javax.jms.InvalidDestinationException: Could
when all receivers acknowledge it
-Original Message-
From: Gordon Sim
Sent: Monday 12 November 2018 17:38
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution
On 12/11/18 16:35, VERMEULEN Olivier wrote:
> So this use case (client acknowledged multic
So this use case (client acknowledged multicast with message routing) will
never be supported?
-Original Message-
From: Gordon Sim
Sent: Monday 12 November 2018 17:20
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution
On 12/11/18 15:50, VERMEULEN Olivier
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Olivier
From: VERMEULEN Olivier
Sent: Tuesday 6 November 2018 16:18
To: users@qpid.apache.org
Subject: [Dispatch Router] Exceptions handling
Hello,
I have the following setup: 1 dispatch
than immediately
>>> commit the message
>>> delete) schedule the message removal to be picked up by the next
>>> commit that the store is asked to perform. This would make the
>>> behaviour more like the BDB store (where we schedule the com
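The scheduling idea quoted above (do not force a commit per message delete; let pending removals piggy-back on the next commit the store performs anyway) can be modelled with a small sketch. This is an illustration of the technique, not the actual store code:

```python
class DeferredDeleteStore:
    """Toy model of deferred deletes: removals are queued rather than
    committed immediately, and flushed as one batch the next time the
    store commits for any other reason."""

    def __init__(self):
        self.pending_deletes = []
        self.commits = []  # each entry is one committed batch of deletes

    def delete_message(self, message_id):
        # No per-delete commit; just remember the removal.
        self.pending_deletes.append(message_id)

    def commit(self):
        # Piggy-back every pending delete on this commit.
        self.commits.append(list(self.pending_deletes))
        self.pending_deletes.clear()
```

The effect is fewer, larger commits, which is the batching behaviour discussed elsewhere in these threads for the JDBC store.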
at the moment, but if I get a chance I'll at least try to test
with Derby.
-- Rob
On Wed, 19 Sep 2018 at 09:14, VERMEULEN Olivier
wrote:
> Thanks a lot Rob, I'll try to test that next week.
>
> Olivier
>
> -Original Message-
> From: Rob Godfrey
> Sent: Wednesday 19
branch
code and see if this problem goes away?
Thanks.
On Thu, Sep 20, 2018 at 10:22 AM VERMEULEN Olivier <
olivier.vermeu...@murex.com> wrote:
> Here are the files.
>
> Thanks,
> Olivier
>
> -Original Message-
> From: Ganesh Murthy
> Sent: jeudi 20 se
Hello,
We did a test with 1 dispatch-router and 2 brokers.
On the brokers we configured a maximum message size of 10KB.
The use case is the following:
* We send a message above 10 KB: we receive a "failure at remote" exception
from the dispatch-router
* We send a message under 10KB:
add the following to your router configuration file.
log {
    module: DEFAULT
    enable: trace+
    output: qdrouterd.log
}
Also please share your router config file if possible and which broker you are
using to test.
On Thu, Sep 20, 2018 at 8:47 AM VERMEULEN Olivier <
olivier.ver
Hello,
Any feedback on the below proposition?
Thanks,
Olivier
From: VERMEULEN Olivier
Sent: Tuesday 18 December 2018 15:01
To: 'users@qpid.apache.org'
Subject: RE: [Proton-C] Discovery
Hello,
We looked into the proton-c implementation and didn't find anything that would
allow us to implement
-> this is currently not happening
Note that we can do the pull-request but I wanted to run the proposition by you
first.
Thanks,
Olivier
From: VERMEULEN Olivier
Sent: Tuesday 11 December 2018 12:34
To: users@qpid.apache.org
Subject: [Proton-C] Discovery
Hello,
I was looking into the qpid-jms-discov
+1
I ran the Murex test suite with it and everything is green
Olivier
-Original Message-
From: Robbie Gemmell
Sent: Monday 17 December 2018 12:47
To: users@qpid.apache.org
Subject: [VOTE] Release Apache Qpid JMS 0.40.0
Hi folks,
I have put together a spin for a 0.40.0 Qpid JMS client
+1
Ran the Murex test suite with Broker-J 7.1.0, Dispatch-Router 1.3.0 and
QPID-JMS 0.39.0
Olivier
-Original Message-
From: Oleksandr Rudyy
Sent: Wednesday 19 December 2018 18:03
To: users@qpid.apache.org
Subject: [VOTE] Release Qpid Broker-J 7.1.0
Hi folks,
I built release artefacts
Hello,
I was looking into the qpid-jms-discovery project which seems very nice for
what I'm trying to do: update the list of dispatch-routers the client can
connect to during failover (with a custom discovery logic).
I wanted to know if there is something similar with proton-c or at least a way
+1
Ran the Murex test suite with Dispatch-Router 1.5.0, Broker-J 7.0.3 and
QPID-JMS 0.39.0
Olivier
-Original Message-
From: Ganesh Murthy
Sent: Wednesday 19 December 2018 23:00
To: users@qpid.apache.org
Subject: [VOTE] Release Qpid Dispatch Router 1.5.0 (RC1)
Hello All,
+1, I launched our messaging test suite and everything is green
Olivier
-Original Message-
From: Robbie Gemmell
Sent: Wednesday 28 November 2018 14:41
To: users@qpid.apache.org
Subject: Re: [VOTE] Release Apache Qpid JMS 0.39.0
On Wed, 28 Nov 2018 at 13:36, Robbie Gemmell wrote:
>
> Hi
Hello,
I wanted to know if it's possible to get the dispatch-router statistics
(especially the connected clients and the memory statistics) through an AMQP
call to $management?
Thanks,
Olivier
Thanks for the quick answer!
-Original Message-
From: Gordon Sim
Sent: Friday 30 November 2018 14:19
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] Get statistics through AMQP management
On 30/11/18 13:02, VERMEULEN Olivier wrote:
> Hello,
>
> I wanted to kno
Hello,
I'm using version 7.0.3 of the Broker-J.
I tried to configure it to use a JDBC (here Sybase) config store.
qpid-server.bat -st JDBC -prop
"systemConfig.connectionUrl=jdbc:sybase:Tds:dell719srv:4100/DB" -prop
"systemConfig.username=USER" -prop
"systemConfig.password=PWD"
But I got the
-Original Message-
From: Rob Godfrey
Sent: Thursday 22 November 2018 16:41
To: users@qpid.apache.org
Subject: Re: [Broker-J] JDBC config store
On Thu, 22 Nov 2018 at 15:11, VERMEULEN Olivier
wrote:
> Hello,
>
> I'm using version 7.0.3 of the Broker-J.
> I tried to configure it to
can work around the issue by creating your
own initial configuration and overriding the type of the preferences store in the
attribute 'preferenceStoreAttributes' to 'Noop'
or 'JSON'.
Kind Regards,
Alex
On Thu, 22 Nov 2018 at 16:53, Rob Godfrey wrote:
>
> On Thu, 22 Nov 2018 at 17:31, VERMEULEN O
you are completely correct, apologies...this is at the
> system config level not broker...
>
> So the only way to pass it in is as a command line parameter -prop
> 'preferenceStoreAttributes=...'
>
> -- Rob
>
>
>
> On Mon, 26 Nov 2018 at 17:55, VERMEULEN Olivier <
> olivier.ve
nymous",
"type" : "Anonymous"
} ],
...
"virtualhostnodes" : [ {
"name" : "default",
"type" : "JSON",
"defaultVirtualHostNode" : "true",
"virtualHostInitialConfiguration" : "
Hello,
We ran a performance test with a bunch of brokers and an Oracle database to
store the messages.
We noticed that the database was a bit overloaded with commits.
Looking at the logs we saw that sending a message was triggering 1 commit for 3
operations (QPID_QUEUE_ENTRIES,
Thanks Gordon, it works perfectly!
Olivier
-Original Message-
From: Gordon Sim
Sent: Monday 17 September 2018 12:23
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] amqp:not-found
On 17/09/18 10:52, VERMEULEN Olivier wrote:
> Hello,
>
> Our messaging topology uses a
sked to perform. This would make the
> behaviour more like the BDB store (where we schedule the commit but
> don't actually force the sync to disk on message deletion).
>
> -- Rob
>
> On Mon, 17 Sep 2018 at 14:55, VERMEULEN Olivier <
> olivier.vermeu...@murex.com> w
Hello,
Our messaging topology uses a dispatch-router in front of a bunch of brokers
(the Java one).
Note that we never use the dispatch-router for direct communication between a
producer and a consumer, everything goes through a broker.
Now when we try to send a message, through the
, 14 Sep 2018 at 15:30, VERMEULEN Olivier
wrote:
> Hello,
>
> We ran a performance test with a bunch of brokers and an Oracle
> database to store the messages.
> We noticed that the database was a bit overloaded with commits.
> Looking at the logs we saw that sending a messag
be removed from the
store, leading to problems when the second consumer tries to read the message.
-- Rob
On Mon, 17 Sep 2018 at 11:39, VERMEULEN Olivier
wrote:
> Hello Rob,
>
> Thanks for the answer.
> I started looking at the code to see if there is something I can do
> about
> > Setting the jms.receiveLocalOnly URI option true would stop it draining the
> > link and so I guess let it return null instead of waiting for the failover
> > process to complete.
> >
> > I don't think I'd ever choose to throw from the consumer there,
> > a
> > > Subject: Re: [QPID JMS] receive with timeout and reconnect
> > >
> > > I guess it is probably blocking on beginning an attempt to drain the link
> > > credit as way to verify no messages before returning null.
> > > Setting the jms.receiveLocalO
Hello,
We're using QPID JMS 0.39.0 with a set of reconnect options that makes the
client retry to connect for 2 hours in case of problem.
When doing a synchronous receive call with a smaller timeout (like 60 seconds)
we were expecting to receive a TimeOutException after 60 seconds but we
> I was able to reproduce the problem using your script. I will raise a JIRA
> and fix the issue.
>
> Thanks.
>
> On Fri, Nov 2, 2018 at 1:30 PM VERMEULEN Olivier <
> olivier.vermeu...@murex.com> wrote:
>
>> Actually we reproduced without the JMS client:
>>
>> #
+1
Launched the Murex test suite with
QPID Broker-J 7.1.1
QPID Dispatch-Router 1.5.0
QPID JMS 0.39.0
Olivier
-Original Message-
From: Oleksandr Rudyy
Sent: Monday 25 February 2019 15:00
To: users@qpid.apache.org
Subject: Re: [VOTE] Release Qpid Broker-J 7.1.1
Casting my vote explicitly
Great, exactly what I was looking for.
Thanks a lot !
Olivier
-Original Message-
From: Keith W
Sent: Thursday 7 February 2019 19:33
To: users@qpid.apache.org
Subject: Re: [Broker-J] non persistent messages
Olivier
On Thu, 7 Feb 2019 at 12:47, VERMEULEN Olivier
wrote:
>
>
AMQP 1.0 persistent messages into
memory store.
Persisting of AMQP 0-x messages into memory store works straight away.
Out of curiosity, what is your messaging use case? Why do you need to use Memory
message store?
Kind Regards,
Alex
On Thu, 7 Feb 2019 at 10:20, VERMEULEN Olivier
wrote:
>
> with that component. (Alan, Cliff, Andrew?)
> > >> > >
> > >> > >
> > >> > Just to add some detail to what Gordon said - in C there is no
> > reconnect
> > >> > support out-of-the-box but you have the tools to implement
Hello,
I have a question regarding the use of multicast with the dispatch-router.
I have 2 dispatch-routers and 1 broker with a topic/queue configuration.
All my addresses are configured as waypoints and the ones for the queue are
also configured as multicast.
When I put 2 listeners on the
+1
Ran the Murex test suite and all is green.
Olivier
-Original Message-
From: Oleksandr Rudyy
Sent: Thursday 9 May 2019 00:24
To: users@qpid.apache.org
Subject: [VOTE] Release Qpid Broker-J 7.1.3
Hi,
I built release artefacts for Qpid Broker-J version 7.1.3 RC1.
Please, give them a
age -
> From: "VERMEULEN Olivier"
> To: users@qpid.apache.org
> Sent: Wednesday, April 17, 2019 10:10:49 AM
> Subject: RE: [Dispatch-router] Multicast
>
> Hello Gordon,
>
> Thanks for your help.
> After struggling for a few hours I also managed to make my use ca
[1,0] receiver link_24 (source: "myQueue", target: none, class: client)
But when a producer attaches to a topic I only get the ID:
attach [1,0] sender link_25 (source:
"ID:c5eab952-81b1-4a55-ad68-8ac277249cd7:1:1:1", target: none, class: client)
Thanks for the tool!
Olivier
-
not yet registered in the dispatch-routers
when the message was sent.
Olivier
-Original Message-
From: Gordon Sim
Sent: Wednesday 17 April 2019 12:39
To: users@qpid.apache.org
Subject: Re: [Dispatch-router] Multicast
On 17/04/2019 8:50 am, VERMEULEN Olivier wrote:
> I have a quest
Hello,
First, thank you guys for the quick reply/analysis.
We should be able to unblock our client by upgrading their dispatch-router
version to 1.9.0.
Now this is only a short term solution.
Indeed, if DISPATCH-1423 makes it to master we won't be able to upgrade to
newer versions of the
ly reported using some sort of external monitoring tool.
>
> -- Rob
>
>
>>
>> Kind Regards,
>> Alex
>>
>> On Fri, 15 Nov 2019 at 16:31, VERMEULEN Olivier <
>> olivier.vermeu...@murex.com>
>> wrote:
>>
>> > Hello,
>>
Hello,
We're using the Dispatch-Router 1.9.0.
Is there a way, using the AMQP management of the Dispatch-Router, to retrieve
statistics like the CPU consumption?
Thanks,
Olivier
Hello,
Regarding the multicast, what will be the behavior in 1.10.0 if all *present*
subscribers do not return the same acknowledgment?
Thanks,
Olivier
-Original Message-
From: Ken Giusti
Sent: Thursday 7 November 2019 17:11
To: users
Subject: Re: multicast without consumers
On Thu,
Hello,
We're using the Broker-J 7.1.3.
Looking at the statistics provided by /api/latest/broker/getStatistics, I don't
see anything related to the CPU consumption.
Did I miss something, or is this information not provided?
Thanks,
Olivier
***
This e-mail contains
+1
Launched our validation pipeline which includes:
- basic sends and receives
- basic routing and filtering
- JDBC message and config stores
- message recovery
- HTTP management and statistics
- TTL, max queue size and max message size
- SSL and SASL
- performance benchmark
Olivier
Hello,
Yes the waypoint address (in from broker) is using a multicast distribution.
Unfortunately skipping the broker is not an option for us right now.
Our whole architecture relies on the broker to guarantee that no messages will
ever be lost...
For information we're asking for a quick