This post concerns the initial title of the thread. Kindly correct me
if I am wrong, but I believe the Qpid client objects are not really thread
safe. I came to this conclusion after trying the following code. The method
below is run in 3 separate parallel threads:
void SomeClass::SomeMethod()
{
    connection = boost::make_shared<Connection>("127.0.0.1");
    connection->open();
    session = boost::make_shared<Session>(connection->createSession());
    receiver = boost::make_shared<Receiver>(session->createReceiver(address_));
    ...
}
Here connection, session, and receiver are instance members of SomeClass,
and they should not have been affected if the objects were thread safe.
However, that was not the case: when I attempted to open a connection I
would get the error *"No protocol received closing"*. I resolved the
problem by introducing a lock on this method. Could anyone kindly tell me
if I did the right thing here?
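For what it's worth, the locking pattern described above can be sketched as
follows. This is a minimal, self-contained illustration using std::mutex and
std::lock_guard; the actual Qpid Connection/Session/Receiver calls are shown
only as comments and replaced by a counter, since this sketch is not meant to
be a definitive version of the real setup code:

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Sketch of the fix: serialize the whole setup sequence with a mutex
// so that only one thread at a time runs the Qpid calls.
class SomeClass {
public:
    void SomeMethod() {
        std::lock_guard<std::mutex> lock(init_mutex_);  // the lock added to fix the error
        // The real body would be the Qpid setup from the post:
        //   connection = boost::make_shared<Connection>("127.0.0.1");
        //   connection->open();
        //   session  = boost::make_shared<Session>(connection->createSession());
        //   receiver = boost::make_shared<Receiver>(session->createReceiver(address_));
        ++setups_completed_;  // stands in for the calls above
    }
    int setups_completed() const { return setups_completed_; }

private:
    std::mutex init_mutex_;      // guards the setup sequence
    int setups_completed_ = 0;   // stands in for connection/session/receiver state
};

// Run the guarded setup from n parallel threads, as in the post.
int run_parallel_setup(int n) {
    SomeClass obj;
    std::vector<std::thread> threads;
    for (int i = 0; i < n; ++i)
        threads.emplace_back([&obj] { obj.SomeMethod(); });
    for (auto& t : threads)
        t.join();
    return obj.setups_completed();
}
```

An alternative worth considering, given the advice later in this thread, is to
give each thread its own connection/session/receiver (e.g. as locals rather
than shared instance members), which avoids the contention entirely.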
On Fri, Jun 14, 2013 at 2:19 PM, Rajesh Khan <[email protected]> wrote:
> A question well asked. I am also curious: will distributing the load of a
> certain queue over multiple queues, each with its own receiver, enhance
> performance?
>
>
>
> On Fri, Jun 14, 2013 at 1:57 PM, Kerry Bonin <[email protected]> wrote:
>
>> I was just considering wasting a bunch of time (building test frameworks)
>> to figure out an answer to this same question - I'm hoping someone has a
>> good answer!
>>
>>
>> On Fri, Jun 14, 2013 at 1:22 PM, Connor Poske <
>> [email protected]> wrote:
>>
>> > I have a tangential question(s) regarding performance under load.
>> >
>> > If one wanted to "scale out" a client receiver so as to achieve the
>> > maximum possible throughput on a multi-cored machine, would it be
>> optimum
>> > to go with 1 thread per Session? Another way of asking would be this:
>> > Between Connections, Sessions, and Receivers, which should you scale
>> > horizontally in a multi threaded fashion in order to achieve optimum
>> > performance? Has anyone done any load testing to determine what the
>> optimum
>> > configuration is?
>> >
>> > It may make sense to consider end-to-end optimum configurations also.
>> The
>> > question could also be: If you really wanted to get as much data as
>> > possible per second from Sender A through Broker B to Receiver C,
>> > maximizing multi-cored hardware, what would that look like? Does having
>> > multiple Senders, Receivers, Sessions, Connections set up in a
>> > multi-threaded way even yield a benefit over just stuffing everything
>> > through one pipe? Should we define queues per Sender/Receiver thread
>> pair
>> > at the broker or does that not make a difference in performance?
>> >
>> > I hope this makes sense!
>> >
>> > Thanks,
>> > Connor
>> >
>> > ________________________________________
>> > From: Gordon Sim [[email protected]]
>> > Sent: Friday, June 14, 2013 1:35 AM
>> > To: [email protected]
>> > Subject: Re: Is the class Receiver in QPID thread safe
>> >
>> > On 06/14/2013 12:41 AM, Rajesh Khan wrote:
>> > > While going through some QPID code I wanted to know if the Receiver
>> Class
>> > > in QPID is thread safe (i.e) Any drawbacks if multiple threads access
>> it
>> > at
>> > > the same time ?
>> >
>> > It is intended to be threadsafe. Occasionally of course there are bugs,
>> > e.g. https://issues.apache.org/jira/browse/QPID-4786 or
>> > https://issues.apache.org/jira/browse/QPID-4764.
>> >
>> > However, I'm not convinced there is generally a great deal of benefit in
>> > having multiple threads fetching from the same receiver (or indeed the
>> > same session). I myself would probably aim for a design that had a
>> > thread per session, unless there was a clear reason not to.
>> >
>> >
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: [email protected]
>> > For additional commands, e-mail: [email protected]
>> >
>> >
>> >
>>
>
>