Re: [Dispatch Router] multicast distribution

2018-11-12 Thread Gordon Sim

On 12/11/18 19:32, VERMEULEN Olivier wrote:

when all receivers acknowledge it


How is the set of 'all' receivers defined? All those active at the time 
the message is received by the first router? (What if one of them had 
been disconnected just before that message?)


If a message is not successfully acked, would it be requeued and 
replayed to all subscribers?





Re: Python Proton - Multiple consumers on dynamic queue

2018-11-12 Thread Gordon Sim

On 12/11/18 20:07, gsupa wrote:

Hi All,

I'm working on migrating a Python 2 app to Python 3, which also means
transitioning to Proton. The application is communicating with a C++ Qpid
broker.

*One of the last roadblocks I'm having in this transition is figuring out
how to dynamically generate a queue with bindings hooked up.* The current
application will create a queue with x-declare and x-bindings, and can scale
with more consumers. These new consumers will connect to this existing queue
(or create it if it does not exist) and messages on this queue will be
distributed across these consumers. Additionally, if there are no consumers,
the queue will be automatically deleted, and messages sent to the exchange
will be routed to an alternate exchange.

Old usage:
{ create: always,
  node: { type: queue, durable: False,
          x-declare: { exclusive: False, auto-delete: True, alternate-exchange: e_fed_example },
          x-bindings: [ { exchange: 'e_example' } ] },
  link: { durable: False } }

(The actual exchange e_example also has alternate-exchange defined)

Some possible solutions I've evaluated:
1) I could dynamically create a receiver on the exchange for each consumer,
but this would mean messages duplicated to all consumers, instead of being
distributed.
2) The broker has an option for queues with a certain pattern to be created
on demand. This allows our consumers to connect to the same queue, and the
queue will auto-delete if all consumers disconnect (allowing for the
alternate exchange on the exchange to kick in). However, the queue is not
bound to an exchange, which isn't helpful.


You can also define a 'topic', which is basically an exchange and the 
queue settings to be used for subscriptions to it.



3) I've seen a C++ example which utilizes the 'shared' capability when
creating a connection, which would theoretically allow new receivers to
connect to the same queue, but I haven't seen any documentation on this for
Python.


https://github.com/grs/amqp_subscriptions/blob/master/e.py#L41 is an 
example of specifying a shared capability (it uses a small utility class 
which is also in that repo).







Python Proton - Multiple consumers on dynamic queue

2018-11-12 Thread gsupa
Hi All,

I'm working on migrating a Python 2 app to Python 3, which also means
transitioning to Proton. The application is communicating with a C++ Qpid
broker.

*One of the last roadblocks I'm having in this transition is figuring out
how to dynamically generate a queue with bindings hooked up.* The current
application will create a queue with x-declare and x-bindings, and can scale
with more consumers. These new consumers will connect to this existing queue
(or create it if it does not exist) and messages on this queue will be
distributed across these consumers. Additionally, if there are no consumers,
the queue will be automatically deleted, and messages sent to the exchange
will be routed to an alternate exchange.

Old usage:
{ create: always,
  node: { type: queue, durable: False,
          x-declare: { exclusive: False, auto-delete: True, alternate-exchange: e_fed_example },
          x-bindings: [ { exchange: 'e_example' } ] },
  link: { durable: False } }

(The actual exchange e_example also has alternate-exchange defined)

Some possible solutions I've evaluated:
1) I could dynamically create a receiver on the exchange for each consumer,
but this would mean messages duplicated to all consumers, instead of being
distributed.
2) The broker has an option for queues with a certain pattern to be created
on demand. This allows our consumers to connect to the same queue, and the
queue will auto-delete if all consumers disconnect (allowing for the
alternate exchange on the exchange to kick in). However, the queue is not
bound to an exchange, which isn't helpful.
3) I've seen a C++ example which utilizes the 'shared' capability when
creating a connection, which would theoretically allow new receivers to
connect to the same queue, but I haven't seen any documentation on this for
Python.

All advice is greatly appreciated.

Thanks,
Griffin






RE: [Dispatch Router] multicast distribution

2018-11-12 Thread VERMEULEN Olivier
when all receivers acknowledge it

-Original Message-
From: Gordon Sim 
Sent: lundi 12 novembre 2018 17:38
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution

On 12/11/18 16:35, VERMEULEN Olivier wrote:
> So this use case (client acknowledged multicast with message routing) will 
> never be supported?

What is your intention? That the message be dequeued when any receiver 
acknowledges it? Or only when all receivers acknowledge it? Or that each 
receiver independently acknowledges, and the message will be resent only to 
those receivers that did not acknowledge?



Re: [VOTE] Release Apache Qpid JMS 0.38.0

2018-11-12 Thread Robbie Gemmell
On Mon, 12 Nov 2018 at 18:43, Robbie Gemmell  wrote:
>
> Hi folks,
>
> I have put together a spin for a 0.38.0 Qpid JMS client release,
> please give it a test out and vote accordingly.
>
> The source and binary archives can be grabbed from:
> https://dist.apache.org/repos/dist/dev/qpid/jms/0.38.0-rc1/
>
> The maven artifacts are also staged for now at:
> https://repository.apache.org/content/repositories/orgapacheqpid-1161
>
> The JIRAs assigned are:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314524&version=12344251
>
> Regards,
> Robbie
>
> P.S. If you want to test it out using Maven (e.g. with the examples
> src, or your own things), you can temporarily add this to your poms to
> access the staging repo:
>
>   <repositories>
>     <repository>
>       <id>staging</id>
>       <url>https://repository.apache.org/content/repositories/orgapacheqpid-1161</url>
>     </repository>
>   </repositories>
>
> The dependency for the client itself would then be:
>
>   <dependency>
>     <groupId>org.apache.qpid</groupId>
>     <artifactId>qpid-jms-client</artifactId>
>     <version>0.38.0</version>
>   </dependency>

+1

I checked things out as follows:
- Verified the signature and checksum files.
- Checked the LICENCE and NOTICE files in the archives.
- Ran mvn apache-rat:check to check licence headers in source archive.
- Ran the source build and tests.
- Built Qpid Broker-J master using the staging repo and ran the systests.
- Built ActiveMQ 5.x master using the staging repo and ran the AMQP tests.
- Built ActiveMQ Artemis master using the staging repo, ran the AMQP tests
  (3 tests failed, but the same tests already failed before this change).
- Built and ran the HelloWorld example from the binary archive.

Robbie




[VOTE] Release Apache Qpid JMS 0.38.0

2018-11-12 Thread Robbie Gemmell
Hi folks,

I have put together a spin for a 0.38.0 Qpid JMS client release,
please give it a test out and vote accordingly.

The source and binary archives can be grabbed from:
https://dist.apache.org/repos/dist/dev/qpid/jms/0.38.0-rc1/

The maven artifacts are also staged for now at:
https://repository.apache.org/content/repositories/orgapacheqpid-1161

The JIRAs assigned are:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12314524&version=12344251

Regards,
Robbie

P.S. If you want to test it out using Maven (e.g. with the examples
src, or your own things), you can temporarily add this to your poms to
access the staging repo:

  <repositories>
    <repository>
      <id>staging</id>
      <url>https://repository.apache.org/content/repositories/orgapacheqpid-1161</url>
    </repository>
  </repositories>

The dependency for the client itself would then be:

  <dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-jms-client</artifactId>
    <version>0.38.0</version>
  </dependency>




Re: [Dispatch Router] multicast distribution

2018-11-12 Thread Gordon Sim

On 12/11/18 16:35, VERMEULEN Olivier wrote:

So this use case (client acknowledged multicast with message routing) will 
never be supported?


What is your intention? That the message be dequeued when any receiver 
acknowledges it? Or only when all receivers acknowledge it? Or that each 
receiver independently acknowledges, and the message will be resent only to 
those receivers that did not acknowledge?





Re: qpid-cpp-0.35 errors

2018-11-12 Thread rammohan ganapavarapu
Kim,

Thanks for clearing that up for me. Does it support SAN storage blocks?
Where can I read more about linearstore if I want to know the low-level
internals?

Ram

On Mon, Nov 12, 2018 at 8:32 AM Kim van der Riet 
wrote:

> The linearstore relies on using libaio for its async disk writes. The
> O_DIRECT flag is used, and this requires a block of aligned memory to
> serve as a memory buffer for disk write operations. To my knowledge,
> this technique only works with local disks and controllers. NFS does not
> allow for DMA memory writes to disk AFAIK, and for as long as I can
> remember, has been a problem for the linearstore. With some work it
> might be possible to make it work using another write technique though.
> NFS has never been a "supported" medium for linearstore.
>
> On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:
> > But how will NFS cause this issue? I am interested because we
> > are using NFS (v4) in some environments, so I wanted to learn what
> > tuning is needed when we use NFS.
> >
> > Thanks,
> > Ram
> >
> > On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
> > rammohanga...@gmail.com> wrote:
> >
> >> Sorry, I thought it was NFS but it's actually a SAN storage volume.
> >>
> >> Thanks,
> >> Ram
> >>
> >> On Fri, Nov 9, 2018, 2:10 AM Gordon Sim wrote:
> >>> On 08/11/18 16:56, rammohan ganapavarapu wrote:
>  I was wrong about NFS for the qpid journal files; it looks like they are on
>  NFS, so does NFS cause this issue?
> >>> Yes, I believe it does. What version of NFS are you using?
> >>>


RE: [Dispatch Router] multicast distribution

2018-11-12 Thread VERMEULEN Olivier
So this use case (client acknowledged multicast with message routing) will 
never be supported?

-Original Message-
From: Gordon Sim 
Sent: lundi 12 novembre 2018 17:20
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution

On 12/11/18 15:50, VERMEULEN Olivier wrote:
> Would it also work if the queue is sharded across multiple brokers?
> Will my consumers receive the messages stored in all the brokers for a given 
> queue?

No, each link would be routed to one broker.




Re: qpid-cpp-0.35 errors

2018-11-12 Thread Kim van der Riet
The linearstore relies on using libaio for its async disk writes. The 
O_DIRECT flag is used, and this requires a block of aligned memory to 
serve as a memory buffer for disk write operations. To my knowledge, 
this technique only works with local disks and controllers. NFS does not 
allow for DMA memory writes to disk AFAIK, and for as long as I can 
remember, has been a problem for the linearstore. With some work it 
might be possible to make it work using another write technique though. 
NFS has never been a "supported" medium for linearstore.


On 11/9/18 4:28 PM, rammohan ganapavarapu wrote:

But how will NFS cause this issue? I am interested because we
are using NFS (v4) in some environments, so I wanted to learn what tuning
is needed when we use NFS.

Thanks,
Ram

On Fri, Nov 9, 2018 at 6:48 AM rammohan ganapavarapu <
rammohanga...@gmail.com> wrote:


Sorry, I thought it was NFS but it's actually a SAN storage volume.

Thanks,
Ram

On Fri, Nov 9, 2018, 2:10 AM Gordon Sim wrote:
On 08/11/18 16:56, rammohan ganapavarapu wrote:

I was wrong about NFS for the qpid journal files; it looks like they are on
NFS, so does NFS cause this issue?

Yes, I believe it does. What version of NFS are you using?




Re: [Dispatch Router] multicast distribution

2018-11-12 Thread Gordon Sim

On 12/11/18 15:50, VERMEULEN Olivier wrote:

Would it also work if the queue is sharded across multiple brokers?
Will my consumers receive the messages stored in all the brokers for a given 
queue?


No, each link would be routed to one broker.




RE: [Dispatch Router] multicast distribution

2018-11-12 Thread VERMEULEN Olivier
Would it also work if the queue is sharded across multiple brokers?
Will my consumers receive the messages stored in all the brokers for a given 
queue?

-Original Message-
From: Gordon Sim 
Sent: lundi 12 novembre 2018 16:09
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution

On 12/11/18 14:58, VERMEULEN Olivier wrote:
> address {
>  prefix: queue_1
>  waypoint: yes
>  distribution: multicast
> }

This will not give you end-to-end acknowledgement with consuming clients. The 
router will acknowledge each message received from the queue on receipt, causing 
the message to be dequeued.

One option is to use link routing for topics. That establishes a distinct 
association between the broker and the consumer, which is needed for 
acknowledged transfer.



Re: [Dispatch Router] multicast distribution

2018-11-12 Thread Gordon Sim

On 12/11/18 14:58, VERMEULEN Olivier wrote:

address {
 prefix: queue_1
 waypoint: yes
 distribution: multicast
}


This will not give you end-to-end acknowledgement with consuming 
clients. The router will acknowledge each message received from the queue 
on receipt, causing the message to be dequeued.


One option is to use link routing for topics. That establishes a 
distinct association between the broker and the consumer, which is 
needed for acknowledged transfer.
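As a rough illustration only (an untested sketch, not a recommended setup), a pair 
of link routes for a topic prefix, reusing the connector name from the configuration 
shown later in this thread, might look something like this:

linkRoute {
    prefix: topic_1
    connection: broker.dell440srv.5673
    direction: in
}
linkRoute {
    prefix: topic_1
    connection: broker.dell440srv.5673
    direction: out
}

With link routing the consumer's link is routed through to the broker itself, so 
acknowledgements flow end-to-end between consumer and broker rather than being 
settled by the router.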





RE: [Dispatch Router] multicast distribution

2018-11-12 Thread VERMEULEN Olivier
I tried to reduce the test case as much as possible.
I'm sending a message with the following code:

Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createTopic("topic_1");
MessageProducer producer = session.createProducer(destination);
TextMessage message = session.createTextMessage("Hello");
producer.send(message);
session.close();

And I'm receiving it with:

Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue subscription = session.createQueue("queue_1");
MessageConsumer consumer = session.createConsumer(subscription);
TextMessage message = (TextMessage) consumer.receiveNoWait();
System.out.println("Message received: " + message.getText());
//message.acknowledge();
session.close();

Note that I commented out the acknowledgement.
So I was expecting the same result if I launch this code twice.
But actually the second time I'm receiving NULL.

Here is the dispatch-router config:

router {
    id: router.dell440srv.10104
    mode: interior
    workerThreads: 4
    defaultDistribution: unavailable
}
listener {
    host: 0.0.0.0
    port: 10104
    role: normal
    saslMechanisms: ANONYMOUS
    requireSsl: no
    authenticatePeer: no
}
log {
    module: DEFAULT
    enable: info+
    includeSource: false
    outputFile: log/qpid.log
}
connector {
    name: broker.dell440srv.5673
    host: dell440srv
    port: 5673
    role: route-container
}
address {
    prefix: topic_1
    waypoint: yes
    distribution: balanced
}
address {
    prefix: queue_1
    waypoint: yes
    distribution: multicast
}
autoLink {
    name: broker.dell440srv.5673.topic_1
    addr: topic_1
    connection: broker.dell440srv.5673
    direction: out
}
autoLink {
    name: broker.dell440srv.5673.queue_1
    addr: queue_1
    connection: broker.dell440srv.5673
    direction: in
}



-Original Message-
From: Gordon Sim 
Sent: lundi 12 novembre 2018 14:16
To: users@qpid.apache.org
Subject: Re: [Dispatch Router] multicast distribution

On 12/11/18 13:07, VERMEULEN Olivier wrote:
> Hello,
>
> I'm using 1 dispatch-router (1.3.0), 2 brokers (7.0.3) and a simple 
> topic/queue configuration.
> I was testing the multicast distribution on the dispatch-router with 1 
> producer on the topic and 2 listeners on the queue.
> In AUTO_ACK everything seems to work fine but when I switch to CLIENT_ACK I 
> have a strange behavior.
> If I don't acknowledge the message in my listeners I would expect the message 
> to go back to the broker (which is what I observed with the balanced 
> distribution) but instead the message is just gone...
> Do you know if there is a bug with the multicast distribution? Or did I 
> misconfigure something?

I'm not clear on what your configuration is exactly, but multicast distribution 
in the router is not end-to-end acknowledged (it can't be, as there may be 
multiple receivers). The router that first handles the message will 
acknowledge it to the producer.




Re: [Dispatch Router] multicast distribution

2018-11-12 Thread Gordon Sim

On 12/11/18 13:07, VERMEULEN Olivier wrote:

Hello,

I'm using 1 dispatch-router (1.3.0), 2 brokers (7.0.3) and a simple topic/queue 
configuration.
I was testing the multicast distribution on the dispatch-router with 1 producer 
on the topic and 2 listeners on the queue.
In AUTO_ACK everything seems to work fine but when I switch to CLIENT_ACK I 
have a strange behavior.
If I don't acknowledge the message in my listeners I would expect the message 
to go back to the broker (which is what I observed with the balanced 
distribution) but instead the message is just gone...
Do you know if there is a bug with the multicast distribution? Or did I 
misconfigure something?


I'm not clear on what your configuration is exactly, but multicast 
distribution in the router is not end-to-end acknowledged (it can't be, 
as there may be multiple receivers). The router that first handles the 
message will acknowledge it to the producer.





[Dispatch Router] multicast distribution

2018-11-12 Thread VERMEULEN Olivier
Hello,

I'm using 1 dispatch-router (1.3.0), 2 brokers (7.0.3) and a simple topic/queue 
configuration.
I was testing the multicast distribution on the dispatch-router with 1 producer 
on the topic and 2 listeners on the queue.
In AUTO_ACK everything seems to work fine but when I switch to CLIENT_ACK I 
have a strange behavior.
If I don't acknowledge the message in my listeners I would expect the message 
to go back to the broker (which is what I observed with the balanced 
distribution) but instead the message is just gone...
Do you know if there is a bug with the multicast distribution? Or did I 
misconfigure something?

Thanks,
Olivier


[SECURITY] [CVE-2018-17187] Apache Qpid Proton-J transport TLS wrapper hostname verification mode not implemented

2018-11-12 Thread Robbie Gemmell
CVE-2018-17187: Apache Qpid Proton-J transport TLS wrapper hostname
verification mode not implemented

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: Versions 0.3 to 0.29.0

Description:
The Proton-J transport includes an optional wrapper layer to perform TLS,
enabled by use of the 'transport.ssl(...)' methods. Unless a verification
mode was explicitly configured, client and server modes previously defaulted
as documented to not verifying a peer certificate, with options to
configure this explicitly or select a certificate verification mode with or
without hostname verification being performed.

The latter hostname verifying mode was not previously implemented, with
attempts to use it resulting in an exception. This left only the option to
verify that the certificate is trusted, leaving such a client vulnerable to
a Man-in-the-Middle (MITM) attack.

Uses of the Proton-J protocol engine which do not utilise the optional
transport TLS wrapper are not impacted, e.g. usage within Qpid JMS.

Resolution:
Uses of Proton-J utilising the optional transport TLS wrapper layer that
wish to enable hostname verification must be upgraded to version 0.30.0 or
later and utilise the VerifyMode#VERIFY_PEER_NAME configuration, which is
now the default for client mode usage unless configured otherwise.
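For illustration, a minimal client-side sketch of selecting that mode on the
transport wrapper might look like the following (the hostname and port are
placeholders, and trust store configuration and error handling are omitted):

import org.apache.qpid.proton.Proton;
import org.apache.qpid.proton.engine.SslDomain;
import org.apache.qpid.proton.engine.SslPeerDetails;
import org.apache.qpid.proton.engine.Transport;

Transport transport = Proton.transport();
SslDomain sslDomain = Proton.sslDomain();
sslDomain.init(SslDomain.Mode.CLIENT);
// Verify the peer's certificate chain and that it matches the expected hostname.
sslDomain.setPeerAuthentication(SslDomain.VerifyMode.VERIFY_PEER_NAME);
// Placeholder peer details; substitute the host and port actually connected to.
SslPeerDetails peerDetails = Proton.sslPeerDetails("broker.example.com", 5671);
transport.ssl(sslDomain, peerDetails);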

Mitigation:
If upgrading is not currently possible then potential workarounds include
providing a custom SSLContext which enables hostname verification, or
omitting use of the 'transport.ssl(...)' methods and performing TLS through
other means such as utilising existing IO framework support or supplying a
custom transport wrapper layer.
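As a sketch of the last option, when TLS is handled outside Proton-J with a plain
JSSE SSLEngine, hostname verification can be enabled through the standard endpoint
identification setting (the hostname and port are placeholders, and checked
exceptions are omitted):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

SSLContext sslContext = SSLContext.getDefault();
// Placeholder peer address; use the host and port actually connected to.
SSLEngine engine = sslContext.createSSLEngine("broker.example.com", 5671);
engine.setUseClientMode(true);
SSLParameters parameters = engine.getSSLParameters();
// "HTTPS" endpoint identification makes JSSE check the certificate against the hostname.
parameters.setEndpointIdentificationAlgorithm("HTTPS");
engine.setSSLParameters(parameters);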

Credit:
This issue was reported by Peter Stockli of Alphabot Security.

References:
[1] https://issues.apache.org/jira/browse/PROTON-1962
[2] https://qpid.apache.org/cves/CVE-2018-17187.html




[ANNOUNCE] Apache Qpid Proton-J 0.30.0 released

2018-11-12 Thread Robbie Gemmell
The Apache Qpid (http://qpid.apache.org) community is pleased to announce
the immediate availability of Apache Qpid Proton-J 0.30.0.

Apache Qpid Proton-J is a messaging library for the Advanced Message Queuing
Protocol 1.0 (AMQP 1.0, ISO/IEC 19464, http://www.amqp.org). It can be used
in a wide range of messaging applications including brokers, clients,
routers, bridges, proxies, and more.

The release is available now from our website:
http://qpid.apache.org/download.html

Binaries are also available via Maven Central:
http://qpid.apache.org/maven.html

Release notes can be found at:
http://qpid.apache.org/releases/qpid-proton-j-0.30.0/release-notes.html

Thanks to all involved,
Robbie
