In my RabbitMQ implementation the producer (client) connects to the
broker, optionally to an exchange, and always creates a private reply queue
for RPC replies. The consumer (server) connects to the broker, optionally
to an exchange, and always creates a request queue. I have the message
producer adding the standard AMQP "reply-to" property to the request,
indicating the queue name to reply to (for non-oneway requests; a oneway
request should not get a reply-to). That is sufficient to route the reply
back to the producer. The problem is on the consumer side, where I can
receive a message in read_virt, but then I have to tuck the reply-to away,
and currently I make it an error if another message is read before the
response goes out, because I cannot associate a response with its request.
So I am looking for the best place where the consumer/server side could
read in a complete message, determine the request ID, and then call readEnd
with that ID. This would give the transport the ability to tuck something
away so it can handle the reply on message-oriented transports. When
replying, it would call writeEnd with the same ID; the transport can then
look up the ID, find the correct reply-to queue, and send out the reply.
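To make the idea concrete, here is a minimal sketch of the bookkeeping the
transport would need. All names (PendingReplyTable, readEnd/writeEnd
signatures, the request-ID type) are hypothetical and not part of the
current Thrift API; this only illustrates the "tuck something away, look it
up on reply" mechanism:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch: a message transport remembers the reply-to queue
// for each in-flight request, keyed by an opaque request ID.
class PendingReplyTable {
public:
    // Called when a complete request has been read: associate the
    // request ID with the reply-to queue carried in the message.
    void readEnd(uint64_t requestId, const std::string& replyTo) {
        pending_[requestId] = replyTo;
    }

    // Called when the reply for requestId is written: look up (and
    // forget) the reply-to queue so the reply can be routed.
    std::string writeEnd(uint64_t requestId) {
        auto it = pending_.find(requestId);
        if (it == pending_.end()) {
            throw std::runtime_error("unknown request id");
        }
        std::string replyTo = it->second;
        pending_.erase(it);
        return replyTo;
    }

private:
    std::map<uint64_t, std::string> pending_;  // request ID -> reply-to queue
};
```

Because replies are looked up by ID rather than by arrival order, this
sketch would also let a non-simple server interleave several outstanding
requests on one transport.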
In addition to this, the ability to tell the transport that an incoming
request is oneway is important. If the consumer (server) gets a oneway
call, it should still ack the message immediately. If the call is a
round-trip RPC, it should not ack the message until it is ready to send
the reply.
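The ack policy above can be sketched in a few lines. This is not real
Thrift or AMQP client code; it only captures the rule, assuming (as in
the design above) that the absence of a reply-to marks a oneway request:

```cpp
#include <cassert>
#include <string>

// When should the consumer ack the broker message?
enum class AckPoint {
    OnReceive,  // oneway: ack immediately after receiving the request
    OnReply     // round-trip RPC: ack only when the reply is ready to go out
};

// Assumption from the design above: a oneway request carries no reply-to,
// so an empty reply-to identifies it as oneway.
AckPoint ackPointFor(const std::string& replyTo) {
    return replyTo.empty() ? AckPoint::OnReceive : AckPoint::OnReply;
}
```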
Further, for the HTTP transport in C++, we could fix it up so that oneway
requests still generate a response (HTTP/1.1 200) immediately. This would
fix some issues on the C++ side. If the request is a round trip, you wait
to send the response header until the call succeeds or fails.
I have round trips working if a "simple" server is used, which means the
transport doesn't need to keep track of multiple outstanding requests, so
it doesn't need the extra logic discussed above. However, that's not
terribly useful.
On Sat, Jul 7, 2018 at 7:32 AM Jens Geyer <jensge...@hotmail.com> wrote:
> > Thrift transport (at least in C++) does not have a concept of a
> > transport hint that could carry through the processor for each request.
> Correct. Because nobody needs it for pure and simple RPC & serialization,
> which is the scope where Thrift operates. There is no need to carry a
> correlation ID around unless you need it. We then make it part of the
> messages we send back and forth. I would love to use THeader for stuff
> like that, but it's only available for C++ right now.
> > I am working on Message Bus support for Apache Thrift. Currently I am in
> > the middle of developing RabbitMQ support and then I will be looking into
> > Kafka.
> Rabbit and 0MQ and ActiveMQ I dealt with a while ago; I also played with
> some other systems like MSMQ and Rebus (an NServiceBus clone). But before
> we get down to the matter, could you share an idea of the generalities of
> the implementation?
> Have fun,
> -----Original Message-----
> From: James E. King III
> Sent: Friday, July 6, 2018 3:54 PM
> To: firstname.lastname@example.org
> Subject: Transport Hints - Message identity and routing
> I am working on Message Bus support for Apache Thrift. Currently I am in
> the middle of developing RabbitMQ support and then I will be looking into
> Kafka. I found that in order to receive requests and route replies
> properly, I need a way to identify where the request came from when sending
> the reply. Thrift transport (at least in C++) does not have a concept of a
> transport hint that could carry through the processor for each request.
> These hints would be entirely opaque and only useful or understandable to
> the layer that inserted them. They also would not likely be useful to
> stream based transports, but I believe they are necessary for message-based
> transports to work properly. There's a bit of an impedance mismatch there,
> however any use of a message bus really requires framed transport which
> separates messages anyway.
> This is something I will be looking into soon, so if anyone has any
> thoughts about it please let me know.
> - Jim